Re: [openstack-dev] [neutron] [lbaas] LBaaS Haproxy performance benchmarking

2015-02-03 Thread Baptiste
On Wed, Feb 4, 2015 at 1:58 AM, Varun Lodaya varun_lod...@symantec.com wrote:
 Hi,

 We were trying to use haproxy as our LBaaS solution on the overlay. Has
 anybody done some baseline benchmarking with LBaaSv1 haproxy solution?

 Also, any recommended tools which we could use to do that?

 Thanks,
 Varun

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Hi Varun,

Large subject :)
Any HTTP injector could do the trick.
I usually use inject (from HAProxy's author) and httpress.
They can only hammer a single URL, but if the purpose is to measure
HAProxy's performance, then this is more than enough.
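If you just need something quick and dirty, a small script along these lines
also works (a minimal sketch in Python 3; the VIP URL, request count and
concurrency are placeholders, and a dedicated injector such as inject will
give more accurate numbers since the Python client quickly becomes the
bottleneck itself):

    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests  # assumes python-requests is installed

    URL = "http://<lb-vip>/"   # placeholder: the LBaaS VIP fronting the pool
    NUM_REQUESTS = 10000
    CONCURRENCY = 50

    def hit(_):
        # one GET through the load balancer; return the HTTP status code
        return requests.get(URL, timeout=5).status_code

    start = time.time()
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        codes = list(pool.map(hit, range(NUM_REQUESTS)))
    elapsed = time.time() - start
    errors = sum(1 for c in codes if c != 200)
    print("%d requests in %.1fs (%.0f req/s), non-200 responses: %d"
          % (NUM_REQUESTS, elapsed, NUM_REQUESTS / elapsed, errors))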

Baptiste

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][nova] Cinder Brick pypi library?

2015-02-03 Thread Walter A. Boring IV

Hey folks,
   I wanted to get some feedback from the Nova folks on using Cinder's
Brick library.  As some of you may or may not know, Cinder has an internal
module called Brick. It's used for discovering and removing volumes attached
to a host.  Most of the code in the Brick module in Cinder originated from
the Nova libvirt volume drivers that do the same thing (discover attached
volumes and then later remove them).  Cinder uses the Brick library for
copy-volume-to-image as well as copy-image-to-volume operations, where the
Cinder node needs to attach volumes to itself to do the work.  The Brick
code inside of Cinder has been used since the Havana release.

  Our plan in Cinder for the Kilo release is to extract the Brick module
into its own separate library, maintained by the Cinder team as a subproject
of Cinder and released as a pypi lib.  Then, for the L release, refactor
Nova's libvirt volume drivers to use the Brick library.  This will enable us
to eliminate the duplicate code between Nova's libvirt volume drivers and
Cinder's internal Brick module.  Both projects can benefit from a shared
library.

So the question I have is: does Nova have an interest in using the code in a
pypi brick library?  If not, then it doesn't make any sense for the Cinder
team to extract its Brick module into a shared (pypi) library.


The first release of brick will only contain the volume discovery and 
removal code.  This is contained in the

initiator directory of cinder/brick/

You can view the current brick code in Cinder here:
https://github.com/openstack/cinder/tree/master/cinder/brick
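
For anyone who hasn't looked at that code, here is a rough sketch of how a
consumer such as Nova's libvirt volume driver might use the initiator part
once it is packaged, assuming the connector-factory interface currently under
cinder/brick/initiator/ carries over into the pypi library more or less
unchanged (the import path, root helper string and connection properties
below are illustrative only, not a final API):

    # hypothetical import path for the extracted pypi library
    from brick.initiator import connector

    # connection_properties would normally come from Cinder's
    # initialize_connection() call for the volume being attached
    connection_properties = {
        'target_portal': '192.0.2.10:3260',
        'target_iqn': 'iqn.2010-10.org.openstack:volume-1234',
        'target_lun': 1,
    }

    conn = connector.InitiatorConnector.factory(
        'ISCSI', root_helper='sudo', use_multipath=False)

    # discover the volume and attach it to this host
    device_info = conn.connect_volume(connection_properties)
    print(device_info['path'])

    # ... do the copy work, then detach and clean up
    conn.disconnect_volume(connection_properties, device_info)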

Thanks for the feedback,
Walt



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Designate] some questions about Designate

2015-02-03 Thread wujiangtaoh...@163.com
Thanks, we will try to deploy it manually.


 I guess PowerDNS and bind are supported. This link may help you:
 http://docs.openstack.org/developer/designate/getting-started.html
 
 Best Regards,
 Chao Yan
 My twitter: Andy Yan @yanchao727  https://twitter.com/yanchao727
 My Weibo: http://weibo.com/herewearenow
 
 2015-02-03 17:38 GMT+08:00 wujiangtaoh...@163.com wujiangtaoh...@163.com:
 
 Hi, I have some questions about the Designate project.

 1. Can Designate be used with OpenStack Icehouse?  How about Juno or Kilo?
 2. I have tried to deploy Designate using devstack from the master branch,
 but only PowerDNS is supported. Can bind9 be supported?
 3. When deploying Designate using devstack, there are some problems: a) I
 can't delete a domain; b) operations in Designate are not reflected
 in PowerDNS.
 Can anyone help me, or point me to some references?

 --
 gentle wu
 ChinaMobile (suzhou) software technology ltd .



gentle wu
ChinaMobile (suzhou) software technology ltd .
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] do we really need project tags in the governance repository?

2015-02-03 Thread Joe Gordon
On Tue, Jan 27, 2015 at 10:15 AM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Thierry Carrez's message of 2015-01-27 02:46:03 -0800:
  Doug Hellmann wrote:
   On Mon, Jan 26, 2015, at 12:02 PM, Thierry Carrez wrote:
   [...]
   I'm open to alternative suggestions on where the list of tags, their
   definition and the list projects they apply to should live. If you
 don't
   like that being in the governance repository, what would have your
   preference ?
  
   From the very beginning I have taken the position that tags are by
   themselves not sufficiently useful for evaluating projects. If someone
   wants to choose between Ceilometer, Monasca, or StackTach, we're
   unlikely to come up with tags that will let them do that. They need
   in-depth discussions of deployment options, performance
 characteristics,
   and feature trade-offs.
 
  They are still useful to give people a chance to discover that those 3
  are competing in the same space, and potentially get an idea of which
  one (if any) is deployed on more than one public cloud, better
  documented, or security-supported. I agree with you that an
  (opinionated) article comparing those 3 solutions would be a nice thing
  to have, but I'm just saying that basic, clearly-defined reference
  project metadata still has a lot of value, especially as we grow the
  number of projects.
 

 I agree with your statement that summary reference metadata is useful. I
 agree with Doug that it is inappropriate for the TC to assign it.

   That said, I object to only saying this is all information that can
 be
   found elsewhere or should live elsewhere, because that is just
 keeping
   the current situation -- where that information exists somewhere but
   can't be efficiently found by our downstream consumers. We need a
   taxonomy and clear definitions for tags, so that our users can easily
   find, understand and navigate such project metadata.
  
   As someone new to the project, I would not think to look in the
   governance documents for state information about a project. I would
   search for things like install guide openstack or component list
   openstack and expect to find them in the documentation. So I think
   putting the information in those (or similar) places will actually make
   it easier to find for someone that hasn't been involved in the
   discussion of tags and the governance repository.
 
  The idea here is to have the reference information in some
  Gerrit-controlled repository (currently openstack/governance, but I'm
  open to moving this elsewhere), and have that reference information
  consumed by the openstack.org website when you navigate to the
  Software section, to present a browseable/searchable list of projects
  with project metadata. I don't expect anyone to read the YAML file from
  the governance repository. On the other hand, the software section of
  the openstack.org website is by far the most visited page of all our web
  properties, so I expect most people to see that.
 

 Just like we gather docs and specs into single websites, we could also
 gather project metadata. Let the projects set their tags. One thing
 that might make sense for the TC to do is to elevate certain tags to
 a more important status that they _will_ provide guidance on when to
 use. However, the actual project to tag mapping would work quite well
 as a single file in whatever repository the project team thinks would
 be the best starting point for a new user.


One way we could implement this is to have the TC manage a library that
converts a file with tag data into a document, along with a list of default
tags; each project can then import that library and include the result in its
docs. This way the TC can suggest tags that make sense, but it's up to
individual projects to apply them.

This is similar to what nova is doing with our hypervisor feature
capability matrix in  https://review.openstack.org/#/c/136380/

We convert a config file into
http://docs-draft.openstack.org/80/136380/7/check/gate-nova-docs/28be8b3//doc/build/html/support-matrix.html
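
As a rough sketch of the kind of shared code being suggested here (the file
format, module layout and function name are invented for illustration, not a
proposal), the per-project tag file could be a small YAML document and the
library could render it into RST for inclusion in each project's docs:

    import yaml  # assumes PyYAML is available

    def render_tags(path):
        """Convert a project's tag file into an RST bullet list."""
        with open(path) as f:
            data = yaml.safe_load(f)
        lines = ['Project tags', '------------', '']
        for tag in data.get('tags', []):
            lines.append('* ``%s`` -- %s'
                         % (tag['name'], tag.get('description', '')))
        return '\n'.join(lines)

    # example tags.yaml kept in the project's own repository:
    # tags:
    #   - name: release:managed
    #     description: Releases are handled by the release team.

    if __name__ == '__main__':
        print(render_tags('tags.yaml'))

The TC-curated part would then just be the definitions of the elevated tags,
while each project owns its own tag file.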




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] high dhcp lease times in neutron deployments considered harmful (or not???)

2015-02-03 Thread Aaron Rosen
I believe I was the one who changed the default value of this. When we
upgraded our internal cloud (~6k networks back then) from Folsom to Grizzly,
we didn't account for the fact that if the dhcp-agents went offline,
instances would give up their lease and unconfigure themselves, causing an
outage. Setting a larger value for this helps to avoid that downtime (as
Brian pointed out as well). Personally, I wouldn't really expect my instance
to automatically change its IP - I think requiring the user to reboot the
instance or use the console to correct the IP should be good enough,
especially since this buys you shorter downtime if an agent fails for a
little while, which is probably more important than having the instance
change its IP.
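
(For reference, the value being debated here is the dhcp_lease_duration
option in neutron.conf; the snippet below just shows the knob, with the
one-day value being discussed:)

    [DEFAULT]
    # Lifetime, in seconds, of the DHCP leases handed out by the dnsmasq
    # processes that the DHCP agents manage.
    dhcp_lease_duration = 86400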

Aaron

On Tue, Feb 3, 2015 at 5:25 PM, Kevin Benton blak...@gmail.com wrote:

 I definitely understand the use-case of having updatable stuff and I don't
 intend to support any proposals to strip away that functionality. What Brian
 was suggesting was to block port IP changes, since that change depended on
 DHCP to deliver the information to the hosts. I was just pointing out that we
 would need to block any API operations that resulted in different
 information being delivered via DHCP for that approach to make sense.

 On Tue, Feb 3, 2015 at 5:01 PM, Robert Collins robe...@robertcollins.net
 wrote:

 On 3 February 2015 at 00:48, Kevin Benton blak...@gmail.com wrote:
 The only thing this discussion has convinced me of is that allowing
 users
  to change the fixed IP address on a neutron port leads to a bad
  user-experience.
 ...

 Documenting a VM reboot is necessary, or even deprecating this (you
 won't
  like that) are sounding better to me by the minute.
 
  If this is an approach you really want to go with, then we should at
 least
  be consistent and deprecate the extra dhcp options extension (or at
 least
  the ability to update ports' dhcp options). Updating subnet attributes
 like
  gateway_ip, dns_nameserves, and host_routes should be thrown out as
 well.
  All of these things depend on the DHCP server to deliver updated
 information
  and are hindered by renewal times. Why discriminate against IP updates
 on a
  port? A failure to receive many of those other types of changes could
 result
  in just as severe of a connection disruption.

 So the reason we added the extra dhcp options extension was to support
 PXE booting physical machines for Nova baremetal, and then Ironic. It
 wasn't added for end users to use on the port, but as a generic way of
 supporting the specific PXE options needed - and that was done that
 way after discussing w/Neutron devs.

 We update ports for two reasons. Primarily, Ironic is HA and will move
 the TFTPd that boots are happening from if an Ironic node has failed.
 Secondly, because a not-uncommon operation on physical machines is to
 replace broken NICs, and forcing a redeploy seemed unreasonable. The
 former case doesn't affect running nodes since it's only consulted on
 reboot. The second case is by definition only possible when the NIC in
 question is offline (whether hotplug hardware or not).

 -Rob


 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] do we really need project tags in the governance repository?

2015-02-03 Thread Jay Pipes

On 01/27/2015 01:15 PM, Clint Byrum wrote:

Excerpts from Thierry Carrez's message of 2015-01-27 02:46:03 -0800:

Doug Hellmann wrote:

On Mon, Jan 26, 2015, at 12:02 PM, Thierry Carrez wrote:
[...]

I'm open to alternative suggestions on where the list of tags, their
definition and the list projects they apply to should live. If you don't
like that being in the governance repository, what would have your
preference ?


 From the very beginning I have taken the position that tags are by
themselves not sufficiently useful for evaluating projects. If someone
wants to choose between Ceilometer, Monasca, or StackTach, we're
unlikely to come up with tags that will let them do that. They need
in-depth discussions of deployment options, performance characteristics,
and feature trade-offs.


They are still useful to give people a chance to discover that those 3
are competing in the same space, and potentially get an idea of which
one (if any) is deployed on more than one public cloud, better
documented, or security-supported. I agree with you that an
(opinionated) article comparing those 3 solutions would be a nice thing
to have, but I'm just saying that basic, clearly-defined reference
project metadata still has a lot of value, especially as we grow the
number of projects.


I agree with your statement that summary reference metadata is useful. I
agree with Doug that it is inappropriate for the TC to assign it.


As do I. I think we can easily over-think the implementation of this 
ostensibly simple idea.


Originally, I proposed that the tag data be managed by the 
project-config-core team in much the same way that new Gerrit/Jeepyb 
project applications are handled.


Best,
-jay


That said, I object to only saying this is all information that can be
found elsewhere or should live elsewhere, because that is just keeping
the current situation -- where that information exists somewhere but
can't be efficiently found by our downstream consumers. We need a
taxonomy and clear definitions for tags, so that our users can easily
find, understand and navigate such project metadata.


As someone new to the project, I would not think to look in the
governance documents for state information about a project. I would
search for things like install guide openstack or component list
openstack and expect to find them in the documentation. So I think
putting the information in those (or similar) places will actually make
it easier to find for someone that hasn't been involved in the
discussion of tags and the governance repository.


The idea here is to have the reference information in some
Gerrit-controlled repository (currently openstack/governance, but I'm
open to moving this elsewhere), and have that reference information
consumed by the openstack.org website when you navigate to the
Software section, to present a browseable/searchable list of projects
with project metadata. I don't expect anyone to read the YAML file from
the governance repository. On the other hand, the software section of
the openstack.org website is by far the most visited page of all our web
properties, so I expect most people to see that.



Just like we gather docs and specs into single websites, we could also
gather project metadata. Let the projects set their tags. One thing
that might make sense for the TC to do is to elevate certain tags to
a more important status that they _will_ provide guidance on when to
use. However, the actual project to tag mapping would work quite well
as a single file in whatever repository the project team thinks would
be the best starting point for a new user.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-02-03 Thread Robert Collins
On 30 January 2015 at 09:04, John Dickinson m...@not.mn wrote:
 I think there are two points. First, the original requirement (in the first 
 email on this thread) is not what's wanted:

 ...looking at the response body and HTTP response code an external system 
 can’t understand what exactly went wrong. And parsing of error messages here 
 is not the way we’d like to solve this problem.

 So adding a response body to parse doesn't solve the problem. The request as 
 I read it is to have a set of well-defined error codes to know what happens.

 Second, my response is a little tongue-in-cheek, because I think the IIS 
 response codes are a perfect example of extending a common, well-known 
 protocol with custom extensions that breaks existing clients. I would hate to 
 see us do that.

 So if we can't subtly break http, and we can't have error response documents, 
 then we're left with custom error codes in the particular response-code 
 class. eg 461 SecurityGroupNotFound or 462 InvalidKeyName (from the original 
 examples)

http://tools.ietf.org/html/draft-ietf-httpbis-p2-semantics-26#section-6

I'm quite certain IIS isn't putting 401.1 in the status code - it
would fail on every client everywhere - I think they may include an
extra header though.

I don't understand your objection to a body: AIUI the original
complaint was that parsing a free-form text field was bad.
Structured data (e.g. JSON) is a whole different kettle of fish, as it
could have a freeform field but also a machine-understood field (be
that numeric, an enumeration or whatever).
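
For example (purely illustrative, not an agreed OpenStack error format), a
body could carry both the freeform text and the machine-understood field
while leaving the standard HTTP status code on the wire untouched:

    # illustrative only -- not an agreed error format
    error_body = {
        "error": {
            "status": 400,
            "code": "InvalidKeyName",   # machine-understood enumeration
            "message": "Key pair 'foo' does not exist",  # freeform text
        }
    }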

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] [ML2] [arp] [l2pop] arp responding for vlan network

2015-02-03 Thread henry hly
Hi ML2'ers,

We have a use case involving a large number of VLAN networks, and we
want to reduce the ARP storm by responding to ARP locally.

Luckily, local ARP responding has been implemented since Icehouse; however,
VLAN is missing from l2pop. Then came this BP [1], which implements the
plugin support of l2pop for configurable network types, and the ofagent
VLAN l2pop.

Now I have found a proposal for OVS VLAN support for l2pop [2]. It's very
small and was submitted as a bugfix, so I want to know: is it possible
for it to be merged in the K cycle?

Best regards
Henry

[1] https://review.openstack.org/#/c/112947/
[2] https://bugs.launchpad.net/neutron/+bug/1413056

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] Resource CREATE failed with TypeError

2015-02-03 Thread Zhou, Zhenzan
Hi, Experts

I am writing a template to start a multi-node devstack cloud inside an
overcloud. The Heat engine got an exception after starting the first
controller VM. I am using the latest Heat code.
Here is the stack trace:

Feb  4 15:10:21 minicloud-allinone-controller0-i7bnc6baumzl heat-engine: 
2015-02-04 15:10:21.733 138441 ERROR oslo_messaging.rpc.dispatcher 
[req-f8fbb6d4-924d-4c0c-8b60-16ed30358765 ] Exception during message handling: 
object of type 'NoneType' has no len()
Feb  4 15:10:21 minicloud-allinone-controller0-i7bnc6baumzl heat-engine: 
2015-02-04 15:10:21.733 138441 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
Feb  4 15:10:21 minicloud-allinone-controller0-i7bnc6baumzl heat-engine: 
2015-02-04 15:10:21.733 138441 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/venvs/heat/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py,
 line 142, in _dispatch_and_reply
Feb  4 15:10:21 minicloud-allinone-controller0-i7bnc6baumzl heat-engine: 
2015-02-04 15:10:21.733 138441 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
Feb  4 15:10:21 minicloud-allinone-controller0-i7bnc6baumzl heat-engine: 
2015-02-04 15:10:21.733 138441 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/venvs/heat/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py,
 line 186, in _dispatch
Feb  4 15:10:21 minicloud-allinone-controller0-i7bnc6baumzl heat-engine: 
2015-02-04 15:10:21.733 138441 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
Feb  4 15:10:21 minicloud-allinone-controller0-i7bnc6baumzl heat-engine: 
2015-02-04 15:10:21.733 138441 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/venvs/heat/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py,
 line 130, in _do_dispatch
Feb  4 15:10:21 minicloud-allinone-controller0-i7bnc6baumzl heat-engine: 
2015-02-04 15:10:21.733 138441 TRACE oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
Feb  4 15:10:21 minicloud-allinone-controller0-i7bnc6baumzl heat-engine: 
2015-02-04 15:10:21.733 138441 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/venvs/heat/local/lib/python2.7/site-packages/osprofiler/profiler.py,
 line 105, in wrapper
Feb  4 15:10:21 minicloud-allinone-controller0-i7bnc6baumzl heat-engine: 
2015-02-04 15:10:21.733 138441 TRACE oslo_messaging.rpc.dispatcher return 
f(*args, **kwargs)
Feb  4 15:10:21 minicloud-allinone-controller0-i7bnc6baumzl heat-engine: 
2015-02-04 15:10:21.733 138441 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/venvs/heat/local/lib/python2.7/site-packages/heat/engine/service.py,
 line 74, in wrapped
Feb  4 15:10:21 minicloud-allinone-controller0-i7bnc6baumzl heat-engine: 
2015-02-04 15:10:21.733 138441 TRACE oslo_messaging.rpc.dispatcher return 
func(self, ctx, *args, **kwargs)
Feb  4 15:10:21 minicloud-allinone-controller0-i7bnc6baumzl heat-engine: 
2015-02-04 15:10:21.733 138441 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/venvs/heat/local/lib/python2.7/site-packages/heat/engine/service.py,
 line 1386, in show_software_config
Feb  4 15:10:21 minicloud-allinone-controller0-i7bnc6baumzl heat-engine: 
2015-02-04 15:10:21.733 138441 TRACE oslo_messaging.rpc.dispatcher sc = 
db_api.software_config_get(cnxt, config_id)
Feb  4 15:10:21 minicloud-allinone-controller0-i7bnc6baumzl heat-engine: 
2015-02-04 15:10:21.733 138441 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/venvs/heat/local/lib/python2.7/site-packages/heat/db/api.py, line 
258, in software_config_get
Feb  4 15:10:21 minicloud-allinone-controller0-i7bnc6baumzl heat-engine: 
2015-02-04 15:10:21.733 138441 TRACE oslo_messaging.rpc.dispatcher return 
IMPL.software_config_get(context, config_id)
Feb  4 15:10:21 minicloud-allinone-controller0-i7bnc6baumzl heat-engine: 
2015-02-04 15:10:21.733 138441 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/venvs/heat/local/lib/python2.7/site-packages/heat/db/sqlalchemy/api.py,
 line 717, in software_config_get
Feb  4 15:10:21 minicloud-allinone-controller0-i7bnc6baumzl heat-engine: 
2015-02-04 15:10:21.733 138441 TRACE oslo_messaging.rpc.dispatcher result = 
model_query(context, models.SoftwareConfig).get(config_id)
Feb  4 15:10:21 minicloud-allinone-controller0-i7bnc6baumzl heat-engine: 
2015-02-04 15:10:21.733 138441 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/venvs/heat/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py,
 line 818, in get
Feb  4 15:10:21 minicloud-allinone-controller0-i7bnc6baumzl heat-engine: 
2015-02-04 15:10:21.733 138441 TRACE oslo_messaging.rpc.dispatcher if 
len(ident) != len(mapper.primary_key):
Feb  4 15:10:21 minicloud-allinone-controller0-i7bnc6baumzl heat-engine: 
2015-02-04 15:10:21.733 138441 TRACE oslo_messaging.rpc.dispatcher TypeError: 
object of type 'NoneType' has no len()
Feb  4 15:10:21 minicloud-allinone-controller0-i7bnc6baumzl heat-engine: 
2015-02-04 15:10:21.733 138441 TRACE oslo_messaging.rpc.dispatcher
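
The trace shows model_query(context, models.SoftwareConfig).get(config_id)
being reached with config_id set to None, and SQLAlchemy's Query.get() then
fails calling len() on that identity. A standalone sketch of the same failure
mode, under the assumption that the installed SQLAlchemy behaves like the one
in the trace (the model and engine below are invented for the reproduction):

    import sqlalchemy as sa
    from sqlalchemy import orm
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class SoftwareConfig(Base):
        __tablename__ = 'software_config'
        id = sa.Column(sa.String(36), primary_key=True)

    engine = sa.create_engine('sqlite://')
    Base.metadata.create_all(engine)
    session = orm.Session(engine)

    config_id = None  # what software_config_get() apparently receives here
    session.query(SoftwareConfig).get(config_id)
    # -> with the SQLAlchemy release in the trace above:
    #    TypeError: object of type 'NoneType' has no len()
    # so the question is why show_software_config is being called with a
    # None config id for this template.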

Re: [openstack-dev] [neutron] high dhcp lease times in neutron deployments considered harmful (or not???)

2015-02-03 Thread Angus Lees
There's clearly not going to be any amount of time that satisfies both
concerns here.

Just to get some other options on the table, here are some things that would
allow a non-zero dhcp lease timeout _and_ address Kevin's original bug
report:

- Just don't allow users to change their IPs without a reboot.

- Bounce the link under the VM when the IP is changed, to force the guest
to re-request a DHCP lease immediately.

- Remove the IP spoofing firewall feature (my favourite, for what it's
worth; I've never liked presenting a layer-2 abstraction but then forcing
specific layer-3 addressing choices by default).

- Make the IP spoofing firewall allow an overlap of both old and new
addresses until the DHCP lease time is up (or the instance reboots).  Adds
some additional async tasks, but this is clearly the required solution if
we want to keep all our existing features.

On Wed Feb 04 2015 at 4:28:11 PM Aaron Rosen aaronoro...@gmail.com wrote:

 I believe I was the one who changed the default value of this. When we
 upgraded our internal cloud (~6k networks back then) from Folsom to Grizzly,
 we didn't account for the fact that if the dhcp-agents went offline,
 instances would give up their lease and unconfigure themselves, causing an
 outage. Setting a larger value for this helps to avoid that downtime (as
 Brian pointed out as well). Personally, I wouldn't really expect my instance
 to automatically change its IP - I think requiring the user to reboot the
 instance or use the console to correct the IP should be good enough,
 especially since this buys you shorter downtime if an agent fails for a
 little while, which is probably more important than having the instance
 change its IP.

 Aaron

 On Tue, Feb 3, 2015 at 5:25 PM, Kevin Benton blak...@gmail.com wrote:

 I definitely understand the use-case of having updatable stuff and I
 don't intend to support any proposals to strip away that functionality.
 What Brian was suggesting was to block port IP changes, since that change
 depended on DHCP to deliver the information to the hosts. I was just
 pointing out that we would need to block any API operations that resulted
 in different information being delivered via DHCP for that approach to
 make sense.

 On Tue, Feb 3, 2015 at 5:01 PM, Robert Collins robe...@robertcollins.net
  wrote:

 On 3 February 2015 at 00:48, Kevin Benton blak...@gmail.com wrote:
 The only thing this discussion has convinced me of is that allowing
 users
  to change the fixed IP address on a neutron port leads to a bad
  user-experience.
 ...

 Documenting a VM reboot is necessary, or even deprecating this (you
 won't
  like that) are sounding better to me by the minute.
 
  If this is an approach you really want to go with, then we should at
 least
  be consistent and deprecate the extra dhcp options extension (or at
 least
  the ability to update ports' dhcp options). Updating subnet attributes
 like
  gateway_ip, dns_nameserves, and host_routes should be thrown out as
 well.
  All of these things depend on the DHCP server to deliver updated
 information
  and are hindered by renewal times. Why discriminate against IP updates
 on a
  port? A failure to receive many of those other types of changes could
 result
  in just as severe of a connection disruption.

 So the reason we added the extra dhcp options extension was to support
 PXE booting physical machines for Nova baremetal, and then Ironic. It
 wasn't added for end users to use on the port, but as a generic way of
 supporting the specific PXE options needed - and that was done that
 way after discussing w/Neutron devs.

 We update ports for two reasons. Primarily, Ironic is HA and will move
 the TFTPd that boots are happening from if an Ironic node has failed.
 Secondly, because a not-uncommon operation on physical machines is to
 replace broken NICs, and forcing a redeploy seemed unreasonable. The
 former case doesn't affect running nodes since it's only consulted on
 reboot. The second case is by definition only possible when the NIC in
 question is offline (whether hotplug hardware or not).

 -Rob


 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 

Re: [openstack-dev] [Neutron] XenAPI questions

2015-02-03 Thread YAMAMOTO Takashi
hi Bob,

is there any news on the CI work?

do you think the idea of small proxy program can work?
i think Terry Wilson's ovsdb effort will eventually need
something similar, unless we will maintain two versions of
the library forever.
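
(To make the proxy idea concrete: at its core it is just a bidirectional byte
pump, roughly like the sketch below; the open questions are the rootwrap
packaging and the dom0 plumbing, and the addresses and ports here are
placeholders only.)

    import socket
    import threading

    LISTEN = ('127.0.0.1', 6633)   # placeholder: where the local side connects
    TARGET = ('192.0.2.1', 6633)   # placeholder: OpenFlow endpoint in the other domain

    def pump(src, dst):
        # copy bytes in one direction until the peer closes
        while True:
            data = src.recv(4096)
            if not data:
                dst.close()
                return
            dst.sendall(data)

    server = socket.socket()
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(LISTEN)
    server.listen(1)
    while True:
        client, _ = server.accept()
        upstream = socket.create_connection(TARGET)
        threading.Thread(target=pump, args=(client, upstream)).start()
        threading.Thread(target=pump, args=(upstream, client)).start()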

btw, when will the next XenAPI IRC meeting be?
(i checked wiki and previous meeting logs but it wasn't clear to me)

YAMAMOTO Takashi

 hi,
 
 good to hear.
 do you have any estimate when it will be available?
 will it cover dom0 side of the code found in
 neutron/plugins/openvswitch/agent/xenapi?
 
 YAMAMOTO Takashi
 
 Hi Yamamoto,
 
 XenAPI and Neutron do work well together, and we have a private CI that is 
 running Neutron jobs.  As it's not currently the public CI, it's harder to 
 access logs.
 We're working on trying to move the existing XenServer CI from a 
 nova-network base to a neutron base, at which point the logs will of course 
 be publicly accessible and tested against any changes, thus making it easy 
 to answer questions such as the below.
 
 Bob
 
 -Original Message-
 From: YAMAMOTO Takashi [mailto:yamam...@valinux.co.jp]
 Sent: 11 December 2014 03:17
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Neutron] XenAPI questions
 
 hi,
 
 i have questions for XenAPI folks:
 
 - what's the status of XenAPI support in neutron?
 - is there any CI covering it?  i want to look at logs.
 - is it possible to write a small program which runs with the xen
   rootwrap and proxies OpenFlow channel between domains?
   (cf. https://review.openstack.org/#/c/138980/)
 
 thank you.
 
 YAMAMOTO Takashi
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Designate] some questions about Designate

2015-02-03 Thread Hayes, Graham
- Sent from my phone
On 3 Feb 2015 10:39, wujiangtaoh...@163.com wrote:

 Hi, I have some questions about the Designate project.

 1、Can Designate be used with openstack icehouse ?  how about Juno or kilo ?

Yup, we have both icehouse and Juno releases. We are currently working on Kilo, 
so if you want stability, I would stick to Juno

 2、I have tried to  deploy Designate using devstack of master branch. but only 
 PowerDNS are supported. Can bind9 be supported ?

Bind9 can be supported, along with a few other DNS servers

 3、when deploy designate using devstack, there are some problems: a) i can't 
 delete a domain  b) the operating of Designate doesn't  be reflected in 
 PowerDNS
 can anyone help me?  for some references  ?

Is designate-pool-manager in the list of enabled services?

Graham


 
 gentle wu
 ChinaMobile (suzhou) software technology ltd .
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder]Questions about Gerrit workflow

2015-02-03 Thread Jeremy Stanley
On 2015-02-03 16:40:59 +0200 (+0200), Eduard Matei wrote:
 We have some questions regarding the Gerrit workflow and what
 should we do next for our patch to be merged:
 
 1. Once we have a CodeReview +2 and a Jenkins Verified +1 what
 should we do next to get the patch merged?

A core reviewer for that project needs to approve the change:

http://docs.openstack.org/infra/manual/core.html#approval

 2. If we have a CodeReview +2 and we want to fix an issue, does
 the next patchset keep the CR +2 ?

All CR votes other than -2 are cleared on a new patchset, unless
it's a trivial enough rebase that the output of `git patch-id`
matches between the old patchset and the new patchset:

https://review.openstack.org/Documentation/config-labels.html#label_copyAllScoresOnTrivialRebase

 3. Once the patch is merged, if we have further changes do we have
 to create a new patch (blueprint/bug report)?

Yes, once a change is approved and merged it is published and
immutable. At that point further code modification requires
putting an entirely new change through review:

http://docs.openstack.org/infra/manual/developers.html
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] SQLite support - drop or not?

2015-02-03 Thread Mike Bayer


Andrew Pashkin apash...@mirantis.com wrote:

 Mike Bayer wrote:
 The patch seems to hardcode the conventions for MySQL and Postgresql.
 The first thought I had was that in order to remove the dependence
 on them here, you’d need to instead simply turn off the
 “naming_convention” in the MetaData if you detect that you’re on one
 of those two databases. That would be a safer idea than trying to
 hardcode these conventions (and would also work for other kinds
 of backends).
 With your solution it will still be necessary for developers
 to guess constraint names when writing new migrations. And it will
 be even harder, because they will also need to handle the case of
 “naming conventions”.

there’s always a naming convention in place; all databases other than SQLite 
produce them on the fly if you don’t specify one.  The purpose of the 
Alembic/SQLAlchemy naming_convention feature is so that you have *one* naming 
convention, rather than N unpredictable conventions.   I’m not sure if you’re 
arguing the feature should not be used.  IMHO it should definitely be used for 
an application that is deploying cross-database.  Otherwise you have no choice 
but to hardcode the naming conventions of each target database individually in 
all cases that you need to refer to them.
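
(For readers who haven't used the feature, a minimal sketch of a single
application-wide convention with SQLAlchemy; the patterns below are the ones
suggested in the Alembic naming tutorial, not necessarily what Murano would
pick:)

    from sqlalchemy import MetaData

    NAMING_CONVENTION = {
        "ix": "ix_%(column_0_label)s",
        "uq": "uq_%(table_name)s_%(column_0_name)s",
        "ck": "ck_%(table_name)s_%(constraint_name)s",
        "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
        "pk": "pk_%(table_name)s",
    }

    # every constraint created against this MetaData gets a predictable
    # name on every backend, SQLite included, so migrations can refer to
    # constraints by name instead of reverse-engineering them
    metadata = MetaData(naming_convention=NAMING_CONVENTION)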




 
 Mike Bayer wrote:
 However, it’s probably worthwhile to introduce a migration that does
 in fact rename existing constraints on MySQL and Postgresql.
 Yes, that's what I want to do in case of the first solution.
 
 Mike Bayer wrote:
 Another possible solution is to drop all current migrations and
 introduce new one with correct names.
 you definitely shouldn’t need to do that.
 Why?
 
 On 30.01.2015 22:00, Mike Bayer wrote:
 Andrew Pashkin apash...@mirantis.com wrote:
 
 Working on this issue I encountered another problem.
 
 Most indices in the project have no names and, because of that,
 developers must reverse-engineer them in every migration.
 Read about that also here [1].
 
 SQLAlchemy and Alembic provide a feature for generating constraint
 names by pattern, specifically to resolve that kind of issue [1].
 
 I decided to introduce usage of this feature in Murano.
 
 I've implemented a solution that preserves backward compatibility
 for migrations and allows all constraints to be renamed according
 to the patterns safely [2]. With it, users that have already deployed Murano
 will be able to upgrade to the new version of Murano without issues.
 
 There are downsides to this solution:
 - It assumes that all versions of Postgres and MySQL use the
 same patterns for constraint name generation.
 - It is hard to implement a test for this solution, and the test will be
 slow, because we need to reproduce the situation where a user has old
 versions of migrations applied and then tries to upgrade.
 
 The patch seems to hardcode the conventions for MySQL and Postgresql.   The 
 first thought I had was that in order to remove the dependence on them here, 
 you’d need to instead simply turn off the “naming_convention” in the 
 MetaData if you detect that you’re on one of those two databases.   That 
 would be a safer idea than trying to hardcode these conventions (and would 
 also work for other kinds of backends).
 
 However, I’m not actually sure that you even need special behavior for these 
 two backends.  If an operator runs these migrations on a clean database, 
 then the constraints are generated with the consistent names on all 
 backends.   if a target database already has these schema constructs 
 present, then these migrations are never run; it doesn’t matter that they 
 have the right or wrong names already.
 
 I suppose then that the fear is that some PG/MySQL databases will have 
 constraints that are named in one convention, and others will have 
 constraints using the native conventions.However, the case now is that 
 all deployments are using native conventions, and being able to DROP these 
 constraints is already not very feasible unless you again were willing to 
 hardcode those naming conventions up forward.The constraints in these 
 initial migrations, assuming you don’t regenerate them, might just need to 
 be left alone, and the project proceeds in the future with a consistent 
 convention.
 
 However, it’s probably worthwhile to introduce a migration that does in fact 
 rename existing constraints on MySQL and Postgresql.  This would be a 
 migration script that emits DROP CONSTRAINT and CREATE CONSTRAINT for all 
 the above constraints that have an old name and a new name.  The script 
 would need to check the backend, as you’re doing now, in order to run, and 
 yes it would hardcode the names of those conventions, but at least it would 
 just be a one-time run against only currently deployed databases.   Since 
 your migrations are run “live”, the script can make itself a “conditional” 
 run by checking for the “old” names and skipping those that don’t exist. 
  
 
 Another possible solution is to drop all current 

Re: [openstack-dev] problems with huge pages and libvirt

2015-02-03 Thread Sahid Orentino Ferdjaoui
On Tue, Feb 03, 2015 at 03:05:24PM +0100, Sahid Orentino Ferdjaoui wrote:
 On Mon, Feb 02, 2015 at 11:44:37AM -0600, Chris Friesen wrote:
  On 02/02/2015 11:00 AM, Sahid Orentino Ferdjaoui wrote:
  On Mon, Feb 02, 2015 at 10:44:09AM -0600, Chris Friesen wrote:
  Hi,
  
  I'm trying to make use of huge pages as described in
  http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-large-pages.html;.
  I'm running kilo as of Jan 27th.
   I've allocated 2MB huge pages on a compute node (5000 per NUMA cell,
   per the capabilities output below).  virsh capabilities
   on that node contains:
  
    <topology>
      <cells num='2'>
        <cell id='0'>
          <memory unit='KiB'>67028244</memory>
          <pages unit='KiB' size='4'>16032069</pages>
          <pages unit='KiB' size='2048'>5000</pages>
          <pages unit='KiB' size='1048576'>1</pages>
          ...
        <cell id='1'>
          <memory unit='KiB'>67108864</memory>
          <pages unit='KiB' size='4'>16052224</pages>
          <pages unit='KiB' size='2048'>5000</pages>
          <pages unit='KiB' size='1048576'>1</pages>
  
  
   I then restarted nova-compute, set hw:mem_page_size=large on a
   flavor, and then tried to boot up an instance with that flavor.  I
  got the error logs below in nova-scheduler.  Is this a bug?
  
  Hello,
  
   Launchpad.net could be more appropriate to
   discuss something which looks like a bug.
  
 https://bugs.launchpad.net/nova/+filebug
  
  Just wanted to make sure I wasn't missing something.  Bug has been opened at
  https://bugs.launchpad.net/nova/+bug/1417201
  
  I added some additional logs to the bug report of what the numa topology
  looks like on the compute node and in NUMATopologyFilter.host_passes().
  
  According to your trace I would say you are running different versions
  of Nova services.
  
  nova should all be the same version.  I'm running juno versions of other
  openstack components though.
 
 Hum if I understand well and according your issue reported to
 launchpad.net
 
   https://bugs.launchpad.net/nova/+bug/1417201
 
 You are trying to test hugepages under kilo which it is not possible
 since it has been implemented in this release (Juno, not yet
 published)

Please ignore this point.

  I have tried to reproduce your issue with trunk but I have not been
  able to do it. Please reopen the bug with more information about your env
  if it is still present. I should receive a notification from it.
 
 Thanks,
 s.
 
   BTW please verify your version of libvirt. Hugepages is supported
   starting from 1.2.8 (but this should definitely not fail so badly like
   that)
  
  Libvirt is 1.2.8.
  Chris
  
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python 3 is dead, long live Python 3

2015-02-03 Thread Jeremy Stanley
On 2015-02-03 08:15:39 -0500 (-0500), Victor Stinner wrote:
[...]
 Debian Testing (Jessie) and Unstable (Sid) provide Python 3.4.2.
[...]

Yep, I'm playing now with the possibility to run jobs on Debian
Jessie, but due to circumstances with the providers who donate
computing resource to us I'm first having to make some more
significant changes to our node build tooling (changes we wanted to
make anyway, this just steps up the timetable a little).
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder]Questions about Gerrit workflow

2015-02-03 Thread Eduard Matei
Hi team,

We have some questions regarding the Gerrit workflow and what should we do
next for our patch to be merged:
1. Once we have a CodeReview +2 and a Jenkins Verified +1 what should we do
next to get the patch merged?
2. If we have a CodeReview +2 and we want to fix an issue, does the next
patchset keep the CR +2 ?
3. Once the patch is merged, if we have further changes do we have to
create a new patch (blueprint/bug report)?

Thanks,
Eduard

-- 

*Eduard Biceri Matei, Senior Software Developer*
www.cloudfounders.com
 | eduard.ma...@cloudfounders.com



*CloudFounders, The Private Cloud Software Company*

Disclaimer:
This email and any files transmitted with it are confidential and
intended solely for the use of the individual or entity to whom they
are addressed.
If you are not the named addressee or an employee or agent responsible
for delivering this message to the named addressee, you are hereby
notified that you are not authorized to read, print, retain, copy or
disseminate this message or any part of it. If you have received this
email in error we request you to notify us by reply e-mail and to
delete all electronic files of the message. If you are not the
intended recipient you are notified that disclosing, copying,
distributing or taking any action in reliance on the contents of this
information is strictly prohibited.
E-mail transmission cannot be guaranteed to be secure or error free as
information could be intercepted, corrupted, lost, destroyed, arrive
late or incomplete, or contain viruses. The sender therefore does not
accept liability for any errors or omissions in the content of this
message, and shall have no liability for any loss or damage suffered
by the user, which arise as a result of e-mail transmission.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder]Questions about Gerrit workflow

2015-02-03 Thread ChangBo Guo
2015-02-03 22:40 GMT+08:00 Eduard Matei eduard.ma...@cloudfounders.com:


 Hi team,

 We have some questions regarding the Gerrit workflow and what should we do
 next for our patch to be merged:
 1. Once we have a CodeReview +2 and a Jenkins Verified +1 what should we
 do next to get the patch merged?

We need at least two CodeReview +2 votes, and then the patch will be
approved by a core reviewer.

 2. If we have a CodeReview +2 and we want to fix an issue, does the next
 patchset keep the CR +2 ?

It will not keep the +2 if the code changes, except for a trivial rebase.

 3. Once the patch is merged, if we have further changes do we have to
 create a new patch (blueprint/bug report)?

   Yes, you need another commit (with a bug or blueprint report).


 Thanks,
 Eduard

 --

 *Eduard Biceri Matei, Senior Software Developer*
 www.cloudfounders.com
  | eduard.ma...@cloudfounders.com



 *CloudFounders, The Private Cloud Software Company*



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
ChangBo Guo(gcb)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] SQLite support - drop or not?

2015-02-03 Thread Andrew Pashkin
Mike Bayer wrote:
 The patch seems to hardcode the conventions for MySQL and Postgresql.
 The first thought I had was that in order to remove the dependence
 on them here, you’d need to instead simply turn off the
 “naming_convention” in the MetaData if you detect that you’re on one
 of those two databases. That would be a safer idea than trying to
 hardcode these conventions (and would also work for other kinds
 of backends).
With your solution it will still be necessary for developers
to guess constraint names when writing new migrations. And it will
be even harder, because they will also need to handle the case of
naming conventions.

Mike Bayer wrote:
 However, it’s probably worthwhile to introduce a migration that does
 in fact rename existing constraints on MySQL and Postgresql.
Yes, that's what I want to do in case of the first solution.

Mike Bayer wrote:
 Another possible solution is to drop all current migrations and
 introduce new one with correct names.
 you definitely shouldn’t need to do that.
Why?

On 30.01.2015 22:00, Mike Bayer wrote:
 
 
 Andrew Pashkin apash...@mirantis.com wrote:
 
 Working on this issue I encountered another problem.

 Most indices in the project have no names and, because of that,
 developers must reverse-engineer them in every migration.
 Read about that also here [1].

 SQLAlchemy and Alembic provide a feature for generating constraint
 names by pattern, specifically to resolve that kind of issue [1].

 I decided to introduce usage of this feature in Murano.

 I've implemented a solution that preserves backward compatibility
 for migrations and allows all constraints to be renamed according
 to the patterns safely [2]. With it, users that have already deployed Murano
 will be able to upgrade to the new version of Murano without issues.

 There are downsides to this solution:
 - It assumes that all versions of Postgres and MySQL use the
  same patterns for constraint name generation.
 - It is hard to implement a test for this solution, and the test will be
  slow, because we need to reproduce the situation where a user has old
  versions of migrations applied and then tries to upgrade.
 
 The patch seems to hardcode the conventions for MySQL and Postgresql.   The 
 first thought I had was that in order to remove the dependence on them here, 
 you’d need to instead simply turn off the “naming_convention” in the MetaData 
 if you detect that you’re on one of those two databases.   That would be a 
 safer idea than trying to hardcode these conventions (and would also work for 
 other kinds of backends).
 
 However, I’m not actually sure that you even need special behavior for these 
 two backends.  If an operator runs these migrations on a clean database, then 
 the constraints are generated with the consistent names on all backends.   if 
 a target database already has these schema constructs present, then these 
 migrations are never run; it doesn’t matter that they have the right or wrong 
 names already.
 
 I suppose then that the fear is that some PG/MySQL databases will have 
 constraints that are named in one convention, and others will have 
 constraints using the native conventions.However, the case now is that 
 all deployments are using native conventions, and being able to DROP these 
 constraints is already not very feasible unless you again were willing to 
 hardcode those naming conventions up forward.The constraints in these 
 initial migrations, assuming you don’t regenerate them, might just need to be 
 left alone, and the project proceeds in the future with a consistent 
 convention.
 
 However, it’s probably worthwhile to introduce a migration that does in fact 
 rename existing constraints on MySQL and Postgresql.  This would be a 
 migration script that emits DROP CONSTRAINT and CREATE CONSTRAINT for all the 
 above constraints that have an old name and a new name.  The script would 
 need to check the backend, as you’re doing now, in order to run, and yes it 
 would hardcode the names of those conventions, but at least it would just be 
 a one-time run against only currently deployed databases.   Since your 
 migrations are run “live”, the script can make itself a “conditional” run by 
 checking for the “old” names and skipping those that don’t exist.  
 

 Another possible solution is to drop all current migrations and
 introduce new one with correct names.
 
 you definitely shouldn’t need to do that.
 
 
 This brings us to new problem - migrations and models are out of sync
 right now in multiple places - there are different field types in
 migrations and models, migrations introduces indices that is absent
 in models, etc.

 And this solution has great downside - it is not backward-compatible,
 so all old users will lost their data.

 We (Murano team) should decide, what solution we want to use.


 [1]
 http://alembic.readthedocs.org/en/latest/naming.html#tutorial-constraint-names
 [2] https://review.openstack.org/150818

 -- 
 With kind regards, 

[openstack-dev] Cross-Project meeting, Tue February 3rd, 21:00 UTC

2015-02-03 Thread Thierry Carrez
Dear PTLs, cross-project liaisons and anyone else interested,

We'll have a cross-project meeting today at 21:00 UTC, with the
following agenda:

* Horizon reviews for project (e.x. Sahara, Trove) panels (SergeyLukjanov)
* openstack-specs discussion
  * Add TRACE definition to log guidelines [1]
* rootwrap overhead - should we stay on this plan (sdague)
  * operators are patching out rootwrap because of its performance
issues (from Nova midcycle)
* Bug 967832 [2] [3] (mriedem)
* Open discussion  announcements

[1] https://review.openstack.org/#/c/145245/
[2] https://bugs.launchpad.net/nova/+bug/967832
[3]
http://lists.openstack.org/pipermail/openstack-dev/2015-February/055801.html

See you there !

For more details on this meeting, please see:
https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][Keystone] Native keystone resources in Heat

2015-02-03 Thread Udi Kalifon
I think the user resource should not have roles in it. There should be a 
Role Assignment resource that grants roles to users on either tenants 
(projects) or domains. On the other hand, the user resource should have a 
domain association. Also, consider adding support for groups and in the future 
maybe also federation. As for trusts, I don't think it should be Heat's 
responsibility to set them  up, because it's up to the users themselves to 
create and grant trusts to their trustees.
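
To make that split concrete, a purely hypothetical HOT fragment might look
like the following (the resource type names and properties are invented for
illustration; no such native resources exist in Heat today, which is exactly
what is being proposed):

  resources:
    dev_user:
      type: OS::Keystone::User            # hypothetical native resource
      properties:
        name: dev-user
        domain: dev-domain                # user carries a domain association
        default_project: dev-project      # but no roles

    dev_user_role:
      type: OS::Keystone::RoleAssignment  # separate role assignment resource
      properties:
        user: {get_resource: dev_user}
        role: Member
        project: dev-project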

- Original Message -
From: Zane Bitter zbit...@redhat.com
To: openstack-dev@lists.openstack.org
Sent: Tuesday, 3 February, 2015 12:26:41 AM
Subject: Re: [openstack-dev] [Heat][Keystone] Native keystone resources in Heat

On 30/01/15 02:19, Thomas Spatzier wrote:
 From: Zane Bitter zbit...@redhat.com
 To: openstack Development Mailing List
 openstack-dev@lists.openstack.org
 Date: 29/01/2015 17:47
 Subject: [openstack-dev] [Heat][Keystone] Native keystone resources in
 Heat

 I got a question today about creating keystone users/roles/tenants in
 Heat templates. We currently support creating users via the
 AWS::IAM::User resource, but we don't have a native equivalent.

 IIUC keystone now allows you to add users to a domain that is otherwise
 backed by a read-only backend (i.e. LDAP). If this means that it's now
 possible to configure a cloud so that one need not be an admin to create
 users then I think it would be a really useful thing to expose in Heat.
 Does anyone know if that's the case?

 I think roles and tenants are likely to remain admin-only, but we have
 precedent for including resources like that in /contrib... this seems
 like it would be comparably useful.

 Thoughts?

 I am really not a keystone expert, so don't know what the security
 implications would be, but I have heard the requirement or wish to be able
 to create users, roles etc. from a template many times. I've talked to
 people who want to explore this for onboarding use cases, e.g. for
 onboarding of lines of business in a company, or for onboarding customers
 in a public cloud case. They would like to be able to have templates that
 lay out the overall structure for authentication stuff, and then
 parameterize it for each onboarding process.
 If this is something to be enabled, that would be interesting to explore.

Thanks for the input everyone. I raised a spec + blueprint here:

https://review.openstack.org/152309

I don't have any immediate plans to work on this, so if anybody wants to 
grab it they'd be more than welcome :)

cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
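
As background for the discussion above, the operations such native resources
would wrap already exist in python-keystoneclient's v3 API. A rough sketch
(credentials and names are placeholders; whether and how Heat exposes these
is exactly what the spec at https://review.openstack.org/152309 will decide):

# Sketch: create a user and grant it a role on a project via keystoneclient v3.
# All credentials, names and the role below are placeholders, not Heat code.
from keystoneclient.v3 import client

ks = client.Client(username='admin', password='secret',
                   project_name='admin',
                   user_domain_name='default', project_domain_name='default',
                   auth_url='http://controller:5000/v3')

user = ks.users.create(name='onboarded-user', password='changeme',
                       domain='default')
role = ks.roles.find(name='member')        # placeholder role name
project = ks.projects.find(name='demo')    # placeholder project name
ks.roles.grant(role, user=user, project=project)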


[openstack-dev] [nova][ec2-api] Tagging functionality in nova's EC2 API

2015-02-03 Thread Alexandre Levine
I'm writing this in regard to several reviews concerning tagging 
functionality for EC2 API in nova.

The list of the reviews concerned is here:

https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/ec2-volume-and-snapshot-tags,n,z

I don't think it's a good idea to merge these reviews. The analysis is 
below:


*Tagging in AWS*

The main goal of the tagging functionality in AWS is to be able to 
efficiently distinguish various resources based on user-defined criteria:


Tags enable you to categorize your AWS resources in different ways, for 
example, by purpose, owner, or environment.

...
You can search and filter the resources based on the tags you add.

(quoted from here: 
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html)


It means that one of the two main use cases is to be able to use tags as a 
filter when you describe something. Another one is to be able to get 
information about a particular tag with all of the resources tagged by it.

Also there is a constraint:

You can tag public or shared resources, but the tags you assign are 
available only to your AWS account and not to the other accounts sharing 
the resource.


The important part here is shared resources which are visible to 
different users but tags are not shared - each user sees his own.
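
To make the filtering use case concrete, here is a minimal sketch of how a
client typically exercises it against EC2 with the boto library (region,
instance id and tag values below are made up for illustration):

# Tag an instance, then list only the instances carrying that tag.
# Region, instance id and tag values are illustrative placeholders.
import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')
conn.create_tags(['i-0123abcd'], {'environment': 'prod'})
reservations = conn.get_all_instances(filters={'tag:environment': 'prod'})
instances = [i for r in reservations for i in r.instances]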

*Existing implementation in nova*

Existing implementation of tags in nova's EC2 API covers only 
instances. But it does so in both areas:

1. Tags management (create, delete, describe,...)
2. Instances filtering (describe_instances with filtering by tags).
The implementation is based on storing tags in each instance's metadata. 
And the nova DB sqlalchemy level uses tag: in queries to allow describing 
instances with tag filters.


I see the following design flaws in existing implementation:

1. It uses instance's own metadata for storing information about 
assigned tags.

Problems:
- it doesn't scale when you want to start using tags for other 
resources. Following this design decision you'll have to store tags in 
other resources' metadata, which means different services' APIs and other 
databases. So performance for searching for tags or tagged resources in 
main use cases should suffer. You'll have to search through several 
remote APIs, querying different metadatas to collect all info and then 
to compile the result.
- instances are not shared resources, but images are. It means that, 
when developed, metadata for images will have to store different tags 
for different users somehow.


2. EC2-specific code (tag: searching in novaDB sqlalchemy) leaked into 
lower layers of nova.
- layering is violated. There should be no EC2-specifics below EC2 API 
library in nova, ideally.
- every other service will have to implement the same solution at its own 
DB level to support tagging for EC2 API.


*Proposed review changes*

The review in question introduces tagging for volumes and snapshots. It 
follows design decisions of existing instance tagging implementation, 
but realizes only one of the two use cases. It provides create, 
delete, describe for tags. But it doesn't provide describe_volumes 
or describe_snapshots for filtering.


It suffers from the design flaws I listed above. It has to query remote 
API (cinder) for metadata. It didn't implement filtering by tag: in 
cinder DB level so we don't see implementation of describe_volumes with 
tags filtering.


*Current stackforge/ec2-api tagging implementation*

In comparison, the implementation of tagging in stackforge/ec2-api 
stores all of the tags and their links to resources and users in a 
separate place. So we can efficiently list tags and their resources or 
filter by tags when describing some of the resources. Also 
user-specific tagging is supported.
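
As an illustration only (table and column names here are hypothetical, not
the actual stackforge/ec2-api schema), keeping tags in a dedicated store
keyed by project and resource makes both use cases a single query:

# Hypothetical sketch of a dedicated, per-project tag store.
from sqlalchemy import Column, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session

Base = declarative_base()

class Tag(Base):
    __tablename__ = 'ec2_tags'
    project_id = Column(String(64), primary_key=True)   # tags are per account
    resource_id = Column(String(64), primary_key=True)  # instance/volume/... id
    key = Column(String(127), primary_key=True)
    value = Column(String(255))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(bind=engine)

# describe_tags-style listing for one project:
tags = session.query(Tag).filter_by(project_id='proj-1').all()

# resource ids matching a tag filter, usable when describing resources:
ids = [t.resource_id for t in
       session.query(Tag).filter_by(project_id='proj-1',
                                    key='purpose', value='web')]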


*Conclusion*

Keeping in mind all of the above, and seeing your discussion about 
deprecation of EC2 API in nova, I don't feel it's a good time to add 
such half-baked code with some potential problems into nova.
I think it's better to concentrate on cleaning up, fixing, reviving and 
making bullet-proof whatever functionality is currently present in nova 
for EC2 and used by clients.


Best regards,
  Alex Levine

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The API WG mission statement

2015-02-03 Thread michael mccune

On 02/02/2015 08:58 AM, Chris Dent wrote:

This is pretty good but I think it leaves unresolved the biggest
question I've had about this process: What's so great about
converging the APIs? If we can narrow or clarify that aspect, good
to go.


+1, really good point


The implication with your statement above is that there is some kind
of ideal which maps, at least to some extent, across the rather
diverse set of resources, interactions and transactions that are
present in the OpenStack ecosystem. It may not be your intent but
the above sounds like we want all the APIs to be kinda similar in
feel or when someone is using an OpenStack-related API they'll be
able to share some knowledge between then with regard to how stuff
works.

I'm not sure how realistic^Wuseful that is when we're in an
environment with APIs with such drastically different interactions
as (to just select three) Swift, Nova and Ceilometer.


even though there are drastically different interactions among the 
services of openstack, i think there is some value to those apis having 
a similar feel to them. i always find it to be useful when i can 
generally infer some of the details about an api by its general 
structure/design. imo, the guidelines will help to bake in some of these 
inferences.


unfortunately, baking a feel into an api guideline is more of an 
analog task. so, very difficult to codify... but i can dream =)




We've seen this rather clearly in the recent debates about handling
metadata.

Now, there's nothing in what you say above that actually straight
out disagrees with my response, but I think there's got to be some
way we can remove the ambiguity or narrow the focus. The need to
remove ambiguity is why the discussion of having a mission statement
came up.


+1



I think where we want to focus our attention is:

* strict adherence to correct HTTP
* proper use of response status codes
* effective (and correct) use of a media types
* some guidance on how to deal with change/versioning
* and _maybe_ a standard for providing actionable error responses
* setting not standards but guidelines for anything else


really solid starting point, the last point deserves emphasis too. i 
think we should be very mindful of the idea that these are guidelines 
not hard standards, but i haven't heard anyone in the meetings referring 
to them as standards. it seemed like we had consensus about the 
guidelines part.


mike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas] Can entity calls be made to driver when entities get associated/disassociated with root entity?

2015-02-03 Thread Doug Wiegley
I'd recommend taking a look at Brandon's review: 
https://review.openstack.org/#/c/144834/

which aims to simplify exactly what you're describing. Please leave feedback 
there.

Thanks,
doug

 On Feb 3, 2015, at 7:13 AM, Vijay Venkatachalam 
 vijay.venkatacha...@citrix.com wrote:
 
 Hi:
  
 In OpenStack neutron lbaas implementation, when entities are created/updated 
 by the user, they might not be associated with the root entity, which is 
 loadbalancer.
 Since root entity has the driver information, the driver cannot be called by 
 lbaas plugin during these operations by user.
 Such entities are set in DEFERRED status until the entity is associated with 
 root entity.
 During this association operation (listener created with pool), the driver 
 api is called for the current operation (listener create); and the driver is 
 expected to perform the original operation (pool create) along with the 
 current operation (listener create).
 This leads to complex handling at the driver, I think it will be better for 
 the lbaas plugin to call the original operation (pool create) driver API in 
 addition to the current operation (listener create) API during the 
 association operation.
  
 That is the summary, please read on to understand the situation in detail.
  
 Let's take the example of pool create in driver.
  
 a.   A pool create operation will not translate to a pool create api in 
 the driver. There is a pool create in the driver API but that is never called 
 today.
 b.  When a listener is created with loadbalancer and pool, the driver's 
 listener create api is called and the driver is expected to create both pool 
 and listener.
 c.   When a listener is first created without loadbalancer but with a 
 pool, the call does not reach driver. Later when the listener is updated with 
 loadbalancer id,  the drivers listener update  API is called and the driver 
 is expected to create both pool and listener.
 d.  When a listener configured with pool and loadbalancer is updated with new 
 pool id,  the driver's listener update api is called. The driver is expected 
 to delete the original pool that was associated, create the new pool and  
 also update the listener
   
 As you can see this is leading to a quite a bit of handling in the driver 
 code. This makes driver code complex.
  
 How about handling this logic in lbaas plugin and it can call the "natural" 
 functions that were deferred.
  
 Whenever an entity is going from a DEFERRED to ACTIVE/CREATE status (through 
 whichever workflow) the plugin can call the CREATE pool function of the 
 driver.
 Whenever an entity is going from an ACTIVE/CREATED to DEFERRED status 
 (through whichever workflow) the plugin can call the DELETE pool function of 
 the driver.
  
 Thanks,
 Vijay V.
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
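
For illustration, the plugin behaviour Vijay proposes in this thread could
look roughly like the sketch below (class, method and driver names are
hypothetical, not the actual neutron-lbaas code):

# Hypothetical sketch only; names do not correspond to real neutron-lbaas code.
DEFERRED, ACTIVE = 'DEFERRED', 'ACTIVE'

class LoadBalancerPluginSketch(object):
    def __init__(self, driver):
        self.driver = driver

    def _sync_pool_status(self, context, pool, old_status, new_status):
        # Replay the "natural" driver call when the status transitions.
        if old_status == DEFERRED and new_status == ACTIVE:
            # Pool became reachable through a root loadbalancer: create it.
            self.driver.create_pool(context, pool)
        elif old_status == ACTIVE and new_status == DEFERRED:
            # Pool lost its association with the root entity: remove it.
            self.driver.delete_pool(context, pool)

    def update_listener(self, context, listener, old_pool, new_pool):
        # The listener update itself still goes to the driver as one call...
        self.driver.update_listener(context, listener)
        # ...while pool transitions are replayed as their own driver calls.
        if old_pool is not None:
            self._sync_pool_status(context, old_pool, ACTIVE, DEFERRED)
        if new_pool is not None:
            self._sync_pool_status(context, new_pool, DEFERRED, ACTIVE)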


[openstack-dev] Hyper-V meeting

2015-02-03 Thread Peter Pouliot
Hi All,

Due to too many members of the team being unable to attend today, we're going 
to postpone the meeting until next week.

p

Peter J. Pouliot CISSP
Microsoft Enterprise Cloud Solutions
C:\OpenStack
New England Research & Development Center
1 Memorial Drive
Cambridge, MA 02142
P: 1.(857).4536436
E: ppoul...@microsoft.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The API WG mission statement

2015-02-03 Thread michael mccune

On 02/02/2015 02:51 PM, Stefano Maffulli wrote:


 To improve developer experience converging the OpenStack API to
 a consistent and pragmatic RESTful design. The working group
 creates guidelines that all OpenStack projects should follow,
 avoids introducing backwards incompatible changes in existing
 APIs and promotes convergence of new APIs and future versions of
 existing APIs.



i like this, definitely a step in the right direction. i like Chris' 
comment (from a different email) about specifying _why_ convergence is 
something we are striving for, or even why it is good. i'm not sure how 
we fit that message into something this tightly crafted, but it's a nice 
consideration.


mike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas] Can entity calls be made to driver when entities get associated/disassociated with root entity?

2015-02-03 Thread Jain, Vivek
Hi Vijay,
My understanding was that each vendor will have different behavior for entity 
creation. LBaaS apis will mark each entity as PENDING_CREATE (?) initially and 
it's up to the specific vendor driver whether to mark entities as DEFERRED or 
actually CREATE them on LB. Vendor logic can decide the action based on whether 
hierarchy/criteria is met per their LB prerequisite.

Thanks,
Vivek

From: Vijay Venkatachalam 
vijay.venkatacha...@citrix.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, February 3, 2015 at 5:13 AM
To: OpenStack Development Mailing List 
(openstack-dev@lists.openstack.org)
Subject: [openstack-dev] [neutron][lbaas] Can entity calls be made to driver 
when entities get associated/disassociated with root entity?

Hi:

In OpenStack neutron lbaas implementation, when entities are created/updated by 
the user, they might not be associated with the root entity, which is 
loadbalancer.
Since root entity has the driver information, the driver cannot be called by 
lbaas plugin during these operations by user.
Such entities are set in DEFERRED status until the entity is associated with 
root entity.
During this association operation (listener created with pool), the driver api 
is called for the current operation (listener create); and the driver is 
expected to perform the original operation (pool create) along with the current 
operation (listener create).
This leads to complex handling at the driver, I think it will be better for the 
lbaas plugin to call the original operation (pool create) driver API in 
addition to the current operation (listener create) API during the association 
operation.

That is the summary, please read on to understand the situation in detail.

Let's take the example of pool create in driver.


a.   A pool create operation will not translate to a pool create api in the 
driver. There is a pool create in the driver API but that is never called today.

b.  When a listener is created with loadbalancer and pool, the driver's 
listener create api is called and the driver is expected to create both pool 
and listener.

c.   When a listener is first created without loadbalancer but with a pool, 
the call does not reach driver. Later when the listener is updated with 
loadbalancer id,  the drivers listener update  API is called and the driver is 
expected to create both pool and listener.

d.  When a listener configured with pool and loadbalancer is updated with new 
pool id,  the driver's listener update api is called. The driver is expected to 
delete the original pool that was associated, create the new pool and  also 
update the listener

As you can see this is leading to a quite a bit of handling in the driver code. 
This makes driver code complex.

How about handling this logic in lbaas plugin and it can call the "natural" 
functions that were deferred.

Whenever an entity is going from a DEFERRED to ACTIVE/CREATE status (through 
whichever workflow) the plugin can call the CREATE pool function of the driver.
Whenever an entity is going from an ACTIVE/CREATED to DEFERRED status (through 
whichever workflow) the plugin can call the DELETE pool function of the driver.

Thanks,
Vijay V.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] high dhcp lease times in neutron deployments considered harmful (or not???)

2015-02-03 Thread Brian Haley
On 02/03/2015 05:10 AM, Kevin Benton wrote:
The unicast DHCP will make it to the wire, but if you've renumbered the
 subnet either a) the DHCP server won't respond because its IP has changed as
 well; or b) the DHCP server won't respond because there is no mapping for the 
 VM
 on its old subnet.
 
 We aren't changing the DHCP server's IP here. The process that I saw was to 
 add
 a subnet and start moving VMs over. It's not 'b' either, because the server
 generates a DHCPNAK in response, which will immediately cause the client to
 release/renew. I have verified this behavior already and recorded a packet
 capture for you.[1] 
 
 In the capture, the renewal value is 4 seconds. I captured one renewal before
 the IP address change from 99.99.99.5 to 10.0.0.25 took place. You can see on
 the next renewal, the DHCP server immediately generates a NACK. The client 
 then
 releases its address, requests a new one, assigns it and ACKs within a couple 
 of
 seconds. 

Thanks for the trace.  So one thing I noticed is that this unicast DHCP only got
to the server since you created a second subnet on this network (dest MAC of
packet was that of same router interface).  If you had created a second network
and subnet this would have been dropped (different broadcast domain).  These
little differences are things users need to know because they lead to heads
banging on desks :(

This would happen if the AZ their VM was in went offline as well, at which
 point they would change their design to be more cloud-aware than it was.  
 Let's
 not heap all the blame on neutron - the user is tasked with vetting that
 their decisions meet the requirements they desire by thoroughly testing it.
 
 An availability zone going offline is not the same as an API operation that
 takes a day to apply. In an internal cloud, maintenance for AZs can be
 advertised and planned around by tenants running single-AZ services. Even if 
 you
 want to reference a public cloud, look how much of the Internet breaks when
 Amazon's us-east-1a or us-east-1d AZs have issues. Even though people are
 supposed to be bringing cattle to the cloud, a huge portion already have pets
 that they are attached to or that they can't convert into cattle. 

You completely missed the context of my reply Kevin - an AZ failure is not a
planned event.  You said people bring pets along, and rebooting them is painful.
 I said that's a bad design because other things can cause it to go offline, for
example:

1. Compute node failure
2. Network node failure
3. Router/switch failure
4. Internet failure
...
99. API call

All the user knows is they can't reach their VM - the cause doesn't matter when
they can't sell their widgets to customers because their site is down.  If it
takes 10 minutes for them to re-create their instance elsewhere that cannot be
blamed on neutron, even if it was our API call that caused it to go offline.

 If our floating IP 'associate' action took 12 hours to take effect on a 
 running
 instance, would telling users to reboot their instances to apply floating IPs
 faster be okay? I would certainly heap the blame on Neutron there.

The difference in a port IP change API call is that it requires action on the
VM's part that neutron can't trigger immediately. It's still asynchronous like a
floating IP call, but the delay is typically going to be longer.  All we can say
is it will take from (0 - interval) seconds.  How is warning the user about
this a bad thing?

How about a big (*) next to all the things that could cause issues?  :)
 
 You want to put it next to all of the API calls to put the burden on the 
 users.
 I want to put it next to the DHCP renewal interval in the config files to put
 the burden on the operators. :)
 
 (*) Increasing this value will increase the delay between API calls and when
 they take effect on the data plane for any that depend on DHCP to relay the
 information. (e.g. port IP/subnet changes, port dhcp option changes, subnet
 gateways, subnet routes, subnet DNS servers, etc)

There is no delay in the API call here, the port was updated just as the user
requested.  Since they can't see into my config file (unless they look at their
lease info or run a tcpdump trace) they are essentially making a blind change
that immediately affects their instance.

And adding a DHCP option to tell them to renew more frequently doesn't fix the
problem, it only lessens it to ~(interval/2) - that might not be acceptable to
users and they need to know the danger.  This is the one point I've been trying
to get across in this whole discussion - these are advanced options that users
need to take caution with, neutron can only do so much.

-Brian


 1. http://paste.openstack.org/show/166048/
 
 
 On Mon, Feb 2, 2015 at 8:57 AM, Brian Haley brian.ha...@hp.com wrote:
 
 Kevin,
 
 I think we are finally converging.  One of the points I've been trying to 
 make
 is 

Re: [openstack-dev] [Neutron] fixed ip info shown for port even when dhcp is disabled

2015-02-03 Thread John Belamaric
Hi Paddu,

I think this is less an issue of the pluggable IPAM than it is the Neutron 
management layer, which requires an IP for a port, as far as I know. If the 
management layer is updated to allow a port to exist without a known IP, then 
an additional IP request type could be added to represent the placeholder you are 
describing.

However, doing so leaves IPAM out of the hands of Neutron and out of the hands 
of the external (presumably authoritative) IPAM system. This could lead to 
duplicate IP issues since each VM is deciding its own IP without any 
centralized coordination. I wouldn't recommend this approach to managing your 
IP space.

John

From: Padmanabhan Krishnan kpr...@yahoo.com
Reply-To: Padmanabhan Krishnan kpr...@yahoo.com
Date: Wednesday, January 28, 2015 at 4:58 PM
To: John Belamaric jbelama...@infoblox.com, 
OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] fixed ip info shown for port even when 
dhcp is disabled

Some follow up questions on this.

In the specs, I see that during a create_port there are provisions to query 
the external source by Pluggable IPAM to return the IP.
This works fine for cases where the external source (say, DHCP server) can be 
queried for the IP address when a launch happens.

Is there a provision to have the flexibility of a late IP assignment?

I am thinking of cases, like the temporary unavailability of external IP source 
or lack of standard interfaces in which case data packet snooping is used to 
find the IP address of a VM after launch. Something similar to late binding of 
IP addresses.
This means the create_port  may not get the IP address from the pluggable IPAM. 
In that case, launch of a VM (or create_port) shouldn't fail. The Pluggable 
IPAM should have some provision to return something equivalent to unavailable 
during create_port and be able to do an update_port when the IP address becomes 
available.

I don't see that option. Please correct me if I am wrong.

Thanks,
Paddu


On Thursday, December 18, 2014 7:59 AM, Padmanabhan Krishnan 
kpr...@yahoo.com wrote:


Hi John,
Thanks for the pointers. I shall take a look and get back.

Regards,
Paddu


On Thursday, December 18, 2014 6:23 AM, John Belamaric 
jbelama...@infoblox.com wrote:


Hi Paddu,

Take a look at what we are working on in Kilo [1] for external IPAM. While this 
does not address DHCP specifically, it does allow you to use an external source 
to allocate the IP that OpenStack uses, which may solve your problem.

Another solution to your question is to invert the logic - you need to take the 
IP allocated by OpenStack and program the DHCP server to provide a fixed IP for 
that MAC.

You may be interested in looking at this Etherpad [2] that Don Kehn put 
together gathering all the various DHCP blueprints and related info, and also 
at this BP [3] for including a DHCP relay so we can utilize external DHCP more 
easily.

[1] https://blueprints.launchpad.net/neutron/+spec/neutron-ipam
[2] https://etherpad.openstack.org/p/neutron-dhcp-org
[3] https://blueprints.launchpad.net/neutron/+spec/dhcp-relay

John

From: Padmanabhan Krishnan kpr...@yahoo.com
Reply-To: Padmanabhan Krishnan kpr...@yahoo.com, 
OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Wednesday, December 17, 2014 at 6:06 PM
To: 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] fixed ip info shown for port even when 
dhcp is disabled

This means whatever tools the operators are using, they need to make sure the IP 
address assigned inside the VM matches what OpenStack has assigned to the port.
Bringing the question that i had in another thread on the same topic:

If one wants to use the provider DHCP server and not have Openstack's DHCP or 
L3 agent/DVR, it may not be possible to do so even with DHCP disabled in 
Openstack network. Even if the provider DHCP server is configured with the same 
start/end range in the same subnet, there's no guarantee that it will match 
with the OpenStack-assigned IP address for bulk VM launches or when there's a 
failure case.
So, how does one deploy external DHCP with Openstack?

If OpenStack hasn't assigned an IP address when DHCP is disabled for a network, 
can't port_update be done with the provider DHCP specified IP address to put 
the anti-spoofing and security rules?
With an OpenStack-assigned IP address, port_update cannot be done since the IP addresses 
aren't in sync and can overlap.

Thanks,
Paddu



On 12/16/14 4:30 AM, Pasquale Porreca 
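
For reference on the port_update question Paddu raises in this thread: the call
itself looks like the sketch below with python-neutronclient (credentials and
IDs are placeholders, and the IP-overlap caveat discussed above still applies).

# Sketch only: update an existing port with an externally assigned address so
# that Neutron's anti-spoofing/security rules match what the VM actually uses.
# Credentials, port and subnet IDs are placeholders.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='demo',
                        auth_url='http://controller:5000/v2.0')

port_id = 'PORT-UUID'        # placeholder
subnet_id = 'SUBNET-UUID'    # placeholder
neutron.update_port(port_id, {
    'port': {
        'fixed_ips': [{'subnet_id': subnet_id, 'ip_address': '10.0.0.42'}],
    },
})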

Re: [openstack-dev] [nova] [api] Get servers with limit and IP address filter

2015-02-03 Thread Steven Kaufer
Vishvananda Ishaya vishvana...@gmail.com wrote on 01/28/2015 11:32:16 AM:

 From: Vishvananda Ishaya vishvana...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 01/28/2015 11:50 AM
 Subject: Re: [openstack-dev] [nova] [api] Get servers with limit and
 IP address filter

 On Jan 28, 2015, at 7:05 AM, Steven Kaufer kau...@us.ibm.com wrote:

 Vishvananda Ishaya vishvana...@gmail.com wrote on 01/27/2015 04:29:50
PM:

  From: Vishvananda Ishaya vishvana...@gmail.com
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org
  Date: 01/27/2015 04:32 PM
  Subject: Re: [openstack-dev] [nova] [api] Get servers with limit and
  IP address filter
 
  The network info for an instance is cached as a blob of data
  (neutron has the canonical version in most installs), so it isn’t
  particularly easy to do at the database layer. You would likely need
  a pretty complex stored procedure to do it accurately.
 
  Vish

 Vish,

 Thanks for the reply.

 I agree with your point about the difficulty in accurately querying
 the blob of data; however, IMHO, the complexity of this fix does not
 preclude the current behavior from being classified as a bug.

 With that in mind, I was wondering if anyone in the community has
 any thoughts on if the current behavior is considered a bug?

 Yes it should be classified as a bug.

Bug filed: https://bugs.launchpad.net/nova/+bug/1417649


 If so, how should it be resolved? A couple options that I could think of:

 1. Disallow the combination of using both a limit and an IP address
 filter by raising an error.

 I think this is the simplest solution.

 Vish

 2. Workaround the problem by removing the limit from the DB query
 and then manually limiting the list of servers (after manually
 applying the IP address filter).

I have proposed a fix that implements this solution:
https://review.openstack.org/#/c/152614

Thanks,
Steven Kaufer

 3. Break up the query so that the server UUIDs that match the IP
 filter are retrieved first and then used as a UUID DB filter. As far
 as I can tell, this type of solution was originally implemented but
 the network query was deemed to expensive [1]. Is there a less
 expensive method to determine the UUIDs (possibly querying the
 cached 'network_info' in the 'instance_info_caches' table)?
 4. Figure out how to accurately query the blob of network info that
 is cached in the nova DB and apply the IP filter at the DB layer.

 [1]: https://review.openstack.org/#/c/131460/

 Thanks,
 Steven Kaufer
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
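
A rough sketch of option 2 above (drop the SQL limit when an IP filter is
present, apply the filter in Python, then enforce the limit); the function and
helper names are hypothetical, not the actual nova code:

import re

def get_servers(db_get_instances, context, filters, limit):
    # Hypothetical sketch of option 2: when an IP filter is present the DB
    # query cannot limit correctly, so fetch unlimited, filter by address in
    # Python, then slice the result down to the requested limit.
    ip_regex = filters.pop('ip', None)
    instances = db_get_instances(context, filters,
                                 limit=None if ip_regex else limit)
    if ip_regex:
        pattern = re.compile(ip_regex)
        instances = [inst for inst in instances
                     if any(pattern.match(addr)
                            for addr in inst.get('addresses', []))]
        if limit is not None:
            instances = instances[:limit]
    return instances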


Re: [openstack-dev] [Murano] SQLite support - drop or not?

2015-02-03 Thread Mike Bayer


Andrew Pashkin apash...@mirantis.com wrote:

 Mike Bayer wrote:
 there's always a naming convention in place; all databases other than
 SQLite produce them on the fly if you don't specify one.  The purpose
 of the Alembic/SQLAlchemy naming_convention feature is so that you
 have *one* naming convention, rather than N unpredictable conventions.
 I'm not sure if you're arguing the feature should not be used.  IMHO
 it should definitely be used for an application that is deploying
 cross-database.  Otherwise you have no choice but to hardcode the
 naming conventions of each target database individually in all cases
 that you need to refer to them.
 You can't just bring SA/Alembic naming conventions into the project,
 because they will collide with auto-generated constraint names.

I was proposing a way to fix this for the murano project which only appears to 
have four migrations so far, but with the assumption that there are existing 
production environments which cannot do a full rebuild.

 
 So you need to hardcode reverse-engineered constraint names into the
 old migrations and then add a new migration that renames constraints
 according to the naming conventions.
 OR you need to drop old
 migrations, and create a new one with naming conventions - that will
 be backward incompatible, but cleaner.


My proposal was to essentially do both strategies.   Build out fully clean 
migrations from the start, but also add an additional "conditional" migration 
that will repair a Postgresql / MySQL database that is already at the head, and 
is detected as having the older naming convention.  Because openstack does not 
appear to use offline migrations, this would be doable, though not necessarily 
worth it.

If Murano can afford to just restart with clean migrations and has no 
production deployments yet which would be disrupted by a full rebuild, then 
sure, just do this.
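
For readers following the thread, the naming_convention feature being discussed
looks roughly like this (the convention dict is the commonly used example from
the SQLAlchemy documentation; adjust to taste):

# With a naming convention on the MetaData, constraint names are deterministic
# on every backend, including SQLite, so migrations can refer to them by name.
from sqlalchemy import MetaData

convention = {
    "ix": "ix_%(column_0_label)s",
    "uq": "uq_%(table_name)s_%(column_0_name)s",
    "ck": "ck_%(table_name)s_%(constraint_name)s",
    "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
    "pk": "pk_%(table_name)s",
}

metadata = MetaData(naming_convention=convention)

# In alembic's env.py the same MetaData is passed as target_metadata so that
# autogenerated migrations emit the same predictable names.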





 
 On 03.02.2015 18:32, Mike Bayer wrote:
 Andrew Pashkin apash...@mirantis.com wrote:
 
 Mike Bayer wrote:
 The patch seems to hardcode the conventions for MySQL and Postgresql.
 The first thought I had was that in order to remove the dependence
 on them here, you'd need to instead simply turn off the
 "naming_convention" in the MetaData if you detect that you're on one
 of those two databases. That would be a safer idea than trying to
 hardcode these conventions (and would also work for other kinds
 of backends).
 With your solution it will still be necessary for developers
 to guess constraint names when writing new migrations. And it will
 be even harder, because they will also need to handle the case of
 naming conventions.
 
 there's always a naming convention in place; all databases other than SQLite 
 produce them on the fly if you don't specify one.  The purpose of the 
 Alembic/SQLAlchemy naming_convention feature is so that you have *one* 
 naming convention, rather than N unpredictable conventions.   I'm not sure 
 if you're arguing the feature should not be used.  IMHO it should definitely 
 be used for an application that is deploying cross-database.  Otherwise you 
 have no choice but to hardcode the naming conventions of each target 
 database individually in all cases that you need to refer to them.
 
 
 
 
 Mike Bayer wrote:
 However, it's probably worthwhile to introduce a migration that does
 in fact rename existing constraints on MySQL and Postgresql.
 Yes, that's what I want to do in case of the first solution.
 
 Mike Bayer wrote:
 Another possible solution is to drop all current migrations and
 introduce new one with correct names.
 you definitely shouldn't need to do that.
 Why?
 
 On 30.01.2015 22:00, Mike Bayer wrote:
 Andrew Pashkin apash...@mirantis.com wrote:
 
 Working on this issue I encountered another problem.
 
 Most indices in the project have no names and because of that,
 the developer must reverse-engineer them in every migration.
 Read about that also here [1].
 
 SQLAlchemy and Alembic provide feature for generation constraint
 names by pattern, specifically to resolve that kind of issues [1].
 
 I decided to introduce usage of this feature in Murano.
 
 I've implemented a solution that preserves backward compatibility
 for migrations and allows renaming all constraints according
 to the patterns safely [2]. With it, users that have already deployed Murano
 will be able to upgrade to a new version of Murano without issues.
 
 There are downsides in this solution:
 - It assumes that all versions of Postgres and MySQL use the
 same patterns for constraint name generation.
 - It is hard to implement a test for this solution and it will be slow,
 because there is a need to reproduce the situation when a user has old
 versions of migrations applied and then tries to upgrade.
 
 The patch seems to hardcode the conventions for MySQL and Postgresql.   
 The first thought I had was that in order to remove the dependence on them 
 here, you'd need to instead simply turn off the "naming_convention" in the 
 MetaData if you 

[openstack-dev] [cinder] documenting volume replication

2015-02-03 Thread Ronen Kat
As some of you are aware, the spec for replication
is not up to date.
The current developer documentation,
http://docs.openstack.org/developer/cinder/api/cinder.volume.driver.html,
covers replication, but some folks indicated that it needs additional details.

In order to get the spec and documentation
up to date I created an Etherpad to be a base for the update.
The Etherpad page is at https://etherpad.openstack.org/p/cinder-replication-redoc

I would appreciate it if interested parties
would take a look at the Etherpad, add comments, details, questions and
feedback.

Ronen,


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] problems with huge pages and libvirt

2015-02-03 Thread Sahid Orentino Ferdjaoui
On Mon, Feb 02, 2015 at 11:44:37AM -0600, Chris Friesen wrote:
 On 02/02/2015 11:00 AM, Sahid Orentino Ferdjaoui wrote:
 On Mon, Feb 02, 2015 at 10:44:09AM -0600, Chris Friesen wrote:
 Hi,
 
 I'm trying to make use of huge pages as described in
 http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-large-pages.html.
 I'm running kilo as of Jan 27th.
 I've allocated 10000 2MB pages on a compute node.  virsh capabilities on 
 that node contains:
 
  <topology>
    <cells num='2'>
      <cell id='0'>
        <memory unit='KiB'>67028244</memory>
        <pages unit='KiB' size='4'>16032069</pages>
        <pages unit='KiB' size='2048'>5000</pages>
        <pages unit='KiB' size='1048576'>1</pages>
 ...
      <cell id='1'>
        <memory unit='KiB'>67108864</memory>
        <pages unit='KiB' size='4'>16052224</pages>
        <pages unit='KiB' size='2048'>5000</pages>
        <pages unit='KiB' size='1048576'>1</pages>
 
 
 I then restarted nova-compute, I set hw:mem_page_size=large on a
 flavor, and then tried to boot up an instance with that flavor.  I
 got the error logs below in nova-scheduler.  Is this a bug?
 
 Hello,
 
 Launchpad.net could be more appropriate to
 discuss something which looks like a bug.
 
https://bugs.launchpad.net/nova/+filebug
 
 Just wanted to make sure I wasn't missing something.  Bug has been opened at
 https://bugs.launchpad.net/nova/+bug/1417201
 
 I added some additional logs to the bug report of what the numa topology
 looks like on the compute node and in NUMATopologyFilter.host_passes().
 
 According to your trace I would say you are running different versions
 of Nova services.
 
 nova should all be the same version.  I'm running juno versions of other
 openstack components though.

Hum, if I understand well and according to your issue reported to
launchpad.net

  https://bugs.launchpad.net/nova/+bug/1417201

You are trying to test hugepages under kilo, which is not possible
since it has been implemented in this release (Juno, not yet
published)

I have tried to reproduce your issue with trunk but I have not been
able to do it. Please reopen the bug with more information about your env
if it is still present. I should receive a notification from it.

Thanks,
s.

 BTW please verify your version of libvirt. Hugepages is supported
 starting from 1.2.8 (but this should definitely not fail so badly like
 that)
 
 Libvirt is 1.2.8.
 Chris
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
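
For anyone reproducing the setup described in this thread, the flavor side of
it can be set up with python-novaclient roughly as follows (a sketch only;
flavor name and credentials are placeholders):

# Sketch: set the large-pages extra spec on a flavor, as the reporter did above.
# Credentials and the flavor name are placeholders.
from novaclient import client

nova = client.Client('2', 'admin', 'secret', 'demo',
                     'http://controller:5000/v2.0')
flavor = nova.flavors.find(name='m1.large')
flavor.set_keys({'hw:mem_page_size': 'large'})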


Re: [openstack-dev] Deprecation of in tree EC2 API in Nova for Kilo release

2015-02-03 Thread Thierry Carrez
Jesse Pretorius wrote:
 I think that perhaps something that shouldn't be lost sight of is that
 the users using the EC2 API are using it as-is. The only commitment that
 needs to be made is to maintain the functionality that's already there,
 rather than attempt to keep it up to scratch with newer functionality
 that's come into EC2.
 
 The stackforge project can perhaps be the incubator for the development
 of a full replacement which is more up-to-date and interacts more like a
 translator. Once it's matured enough that the users want to use it
 instead of the old EC2 API in-tree, then perhaps deprecation is the
 right option.
 
 Between now and then, I must say that I agree with Sean - perhaps the
 best strategy would be to make it clear somehow that the EC2 API isn't a
 fully tested or up-to-date API.

Right, there are several dimensions in the issue we are discussing.

- I completely agree we should communicate clearly the status of the
in-tree EC2 API to our users.

- Deprecation is a mechanism by which we communicate to our users that
they need to evolve their current usage of OpenStack. It should not be
used lightly, and it should be a reliable announcement. In the past we
deprecated things based on a promised replacement plan that never
happened, and we had to un-deprecate. I would very much prefer if we
didn't do that ever again, because it's training users to ignore our
deprecation announcements. That is what I meant in my earlier email. We
/can/ deprecate, but only when we are 99.9% sure we will follow up on that.

- The supposed 35% of our users are actually more like 44% of the user
survey respondents replying yes when asked if they ever used the EC2
API in their deployment of OpenStack. Given that it's far from being up
to date or from behaving fully like the current Amazon EC2 API, it's
fair to say that those users are probably more interested in keeping the
current OpenStack EC2 API support as-is, than they are interested in a
new project that will actually make it better and/or different.

- Given legal uncertainty about closed APIs it might make *legal* sense
to remove it from Nova or at least mark it deprecated and freeze it
until that removal can happen. Projects in Stackforge are, by
definition, not OpenStack projects, and therefore do not carry the same
risk.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] operators vs users for choosing convergence engine

2015-02-03 Thread Sergey Kraynev
I really like some benefits of this approach:
 - easy use in functional tests (did not require additional job)
 - every user can try from this moment
 - we can use both code ways in the same time without service restarting

So I really close to give my +1, if someone give answers on questions from
other side:
 - as Angus said, how do we plan to deprecate it then (when the new code becomes
default) in a way that is softer for users?
 - As I understand, the parameter option will have higher priority than the
config,
   so the operator has no way to block it from users and they should decide
themselves what they want (what is safer)? (and of course choose new,
because it sounds better)
 - I am really afraid that some users will start using the new option and will be
disappointed, because they expect a lot of benefits, but the reality will be
different.
   IMO, exposing such a young feature looks cool for development and
gathering feedback, but dangerous for stable product users + a bad example
commercially.

If somebody helps me to remove these doubts, I will be thankful :)

Regards,
Sergey.

On 3 February 2015 at 03:52, Steve Baker sba...@redhat.com wrote:

 A spec has been raised to add a config option to allow operators to choose
 whether to use the new convergence engine for stack operations. For some
 context you should read the spec first [1]

 Rather than doing this, I would like to propose the following:
 * Users can (optionally) choose which engine to use by specifying an
 engine parameter on stack-create (choice of classic or convergence)
 * Operators can set a config option which determines which engine to use
 if the user makes no explicit choice
 * Heat developers will set the default config option from classic to
 convergence when convergence is deemed sufficiently mature

 I realize it is not ideal to expose this kind of internal implementation
 detail to the user, but choosing convergence _will_ result in different
 stack behaviour (such as multiple concurrent update operations) so there is
 an argument for giving the user the choice. Given enough supporting
 documentation they can choose whether convergence might be worth trying for
 a given stack (for example, a large stack which receives frequent updates)

 Operators likely won't feel they have enough knowledge to make the call
 that a heat install should be switched to using all convergence, and users
 will never be able to try it until the operators do (or the default
 switches).

 Finally, there are also some benefits to heat developers. Creating a whole
 new gate job to test convergence-enabled heat will consume its share of CI
 resource. I'm hoping to make it possible for some of our functional tests
 to run against a number of scenarios/environments. Being able to run tests
 under classic and convergence scenarios in one test run will be a great
 help (for performance profiling too).

 If there is enough agreement then I'm fine with taking over and updating
 the convergence-config-option spec.

 [1] https://review.openstack.org/#/c/152301/2/specs/kilo/
 convergence-config-option.rst

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-02-03 Thread Jay Pipes

On 02/02/2015 09:07 PM, Everett Toews wrote:

On Feb 2, 2015, at 7:24 PM, Sean Dague s...@dague.net
mailto:s...@dague.net wrote:


On 02/02/2015 05:35 PM, Jay Pipes wrote:

On 01/29/2015 12:41 PM, Sean Dague wrote:

Correct. This actually came up at the Nova mid cycle in a side
conversation with Ironic and Neutron folks.

HTTP error codes are not sufficiently granular to describe what happens
when a REST service goes wrong, especially if it goes wrong in a way
that would let the client do something other than blindly try the same
request, or fail.

Having a standard json error payload would be really nice.

{
 fault: ComputeFeatureUnsupportedOnInstanceType,
 message: This compute feature is not supported on this kind of
instance type. If you need this feature please use a different instance
type. See your cloud provider for options.
}

That would let us surface more specific errors.

snip


Standardization here from the API WG would be really great.


What about having a separate HTTP header that indicates the OpenStack
Error Code, along with a generated URI for finding more information
about the error?

Something like:

X-OpenStack-Error-Code: 1234
X-OpenStack-Error-Help-URI: http://errors.openstack.org/1234

That way is completely backwards compatible (since we wouldn't be
changing response payloads) and we could handle i18n entirely via the
HTTP help service running on errors.openstack.org.


That could definitely be implemented in the short term, but if we're
talking about API WG long term evolution, I'm not sure why a standard
error payload body wouldn't be better.


Agreed. And using the "X-" prefix in headers has been deprecated for
over 2 years now [1]. I don’t think we should be using it for new things.

Everett

[1] https://tools.ietf.org/html/rfc6648


Ha! Good to know about the X- stuff :) Duly noted!

-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
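
To make the proposals in this thread concrete, a client consuming either form
might look like the sketch below (the header names and JSON keys are the ones
proposed above, not an existing standard):

# Sketch of a client handling the proposed error-code schemes; the header
# names and JSON keys are hypothetical/proposed, not an existing API.
import requests

resp = requests.post('https://cloud.example.com/v2.1/servers',
                     json={'server': {}}, headers={'X-Auth-Token': 'TOKEN'})
if resp.status_code >= 400:
    # Header-based variant (without the deprecated "X-" prefix):
    code = resp.headers.get('OpenStack-Error-Code')
    help_uri = resp.headers.get('OpenStack-Error-Help-URI')

    # Payload-based variant:
    try:
        body = resp.json()
    except ValueError:
        body = {}
    fault = body.get('fault')      # e.g. "ComputeFeatureUnsupportedOnInstanceType"
    message = body.get('message')

    print(code, help_uri, fault, message)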


Re: [openstack-dev] Deprecation of in tree EC2 API in Nova for Kilo release

2015-02-03 Thread Sean Dague
On 02/03/2015 08:57 AM, Thierry Carrez wrote:
 Jesse Pretorius wrote:
 I think that perhaps something that shouldn't be lost sight of is that
 the users using the EC2 API are using it as-is. The only commitment that
 needs to be made is to maintain the functionality that's already there,
 rather than attempt to keep it up to scratch with newer functionality
 that's come into EC2.

 The stackforge project can perhaps be the incubator for the development
 of a full replacement which is more up-to-date and interacts more like a
 translator. Once it's matured enough that the users want to use it
 instead of the old EC2 API in-tree, then perhaps deprecation is the
 right option.

 Between now and then, I must say that I agree with Sean - perhaps the
 best strategy would be to make it clear somehow that the EC2 API isn't a
 fully tested or up-to-date API.
 
 Right, there are several dimensions in the issue we are discussing.
 
 - I completely agree we should communicate clearly the status of the
 in-tree EC2 API to our users.
 
 - Deprecation is a mechanism by which we communicate to our users that
 they need to evolve their current usage of OpenStack. It should not be
 used lightly, and it should be a reliable announcement. In the past we
 deprecated things based on a promised replacement plan that never
 happened, and we had to un-deprecate. I would very much prefer if we
 didn't do that ever again, because it's training users to ignore our
 deprecation announcements. That is what I meant in my earlier email. We
 /can/ deprecate, but only when we are 99.9% sure we will follow up on that.
 
 - The supposed 35% of our users are actually more like 44% of the user
 survey respondents replying yes when asked if they ever used the EC2
 API in their deployment of OpenStack. Given that it's far from being up
 to date or from behaving fully like the current Amazon EC2 API, it's
 fair to say that those users are probably more interested in keeping the
 current OpenStack EC2 API support as-is, than they are interested in a
 new project that will actually make it better and/or different.

All of which is fair, however there is actually no such thing as
keeping support as-is. The EC2 API is the equivalent of parts of Nova
+ Neutron + Cinder + Keystone + Swift. However the whole thing is
implemented in Nova. Nova, for instance, has a terrible s3 object store
in tree to make any of this work (so that the EC2 API doesn't actually
depend on Swift). As the projects drift away and change their semantics,
and bump their APIs, keeping the same support is real work that's not
getting done.

It will become different over time regardless; the real question is whether
it gets different worse or different better.

 - Given legal uncertainty about closed APIs it might make *legal* sense
 to remove it from Nova or at least mark it deprecated and freeze it
 until that removal can happen. Projects in Stackforge are, by
 definition, not OpenStack projects, and therefore do not carry the same
 risk.
 


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python 3 is dead, long live Python 3

2015-02-03 Thread Victor Stinner
Hi,

It's good to move forward to Python 3.4 :-)

 [2] https://launchpad.net/bugs/1367907

This bug was introduced in Python 3.4.0 and fixed in Python 3.4.1. It's too bad
that Ubuntu Trusty hasn't yet upgraded Python 3.4 to 3.4.1 (released 6 months
ago) or 3.4.2. Request to upgrade Python 3.4 in Ubuntu Trusty:
https://bugs.launchpad.net/ubuntu/+source/python3.4/+bug/1348954
(the upgrade is already scheduled for the end of February, after the release of
Python 3.4.3)

Debian Testing (Jessie) and Unstable (Sid) provide Python 3.4.2. Debian Stable 
(Wheezy) only provides Python 3.2.3 (which doesn't accept the u'unicode' prefix
syntax :-/ and doesn't support yield-from).
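
For reference, the two Python 3.2 incompatibilities mentioned above, both of
which work on Python 3.3+/3.4:

    # SyntaxError on Python 3.2, fine on 3.3+/3.4:
    name = u'unicode'            # u'' string prefix, reintroduced by PEP 414

    def chained(*iterables):
        for iterable in iterables:
            yield from iterable  # generator delegation, added by PEP 380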

Fedora 21 provides Python 3.4.1. (Fedora 20 only provides Python 3.3.2).

Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][lbaas] Can entity calls be made to driver when entities get associated/disassociated with root entity?

2015-02-03 Thread Vijay Venkatachalam
Hi:

In the OpenStack neutron lbaas implementation, when entities are created/updated
by the user, they might not be associated with the root entity, which is the
loadbalancer.
Since the root entity holds the driver information, the lbaas plugin cannot call
the driver during these user operations.
Such entities are set to DEFERRED status until the entity is associated with a
root entity.
During this association operation (listener created with pool), the driver api 
is called for the current operation (listener create); and the driver is 
expected to perform the original operation (pool create) along with the current 
operation (listener create).
This leads to complex handling in the driver. I think it would be better for the
lbaas plugin to call the driver API for the original operation (pool create) in
addition to the current operation (listener create) during the association
operation.

That is the summary, please read on to understand the situation in detail.

Let's take the example of pool create in the driver.


a.   A pool create operation will not translate to a pool create api in the 
driver. There is a pool create in the driver API but that is never called today.

b.  When a listener is created with loadbalancer and pool, the driver's 
listener create api is called and the driver is expected to create both pool 
and listener.

c.   When a listener is first created without a loadbalancer but with a pool,
the call does not reach the driver. Later, when the listener is updated with a
loadbalancer id, the driver's listener update API is called and the driver is
expected to create both pool and listener.

d.  When a listener configured with a pool and loadbalancer is updated with a new
pool id, the driver's listener update API is called. The driver is expected to
delete the original pool that was associated, create the new pool and also
update the listener.

As you can see this is leading to a quite a bit of handling in the driver code. 
This makes driver code complex.

How about handling this logic in the lbaas plugin, so that it can call the
natural functions that were deferred?

Whenever an entity is going from DEFERRED to ACTIVE/CREATED status (through
whichever workflow), the plugin can call the CREATE pool function of the driver.
Whenever an entity is going from ACTIVE/CREATED to DEFERRED status (through
whichever workflow), the plugin can call the DELETE pool function of the driver.
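
A rough sketch of that plugin-side dispatch (class, method and status names are
illustrative only, not the actual neutron-lbaas code):

    class LbaasPluginDispatchSketch(object):
        """Illustrative only: replay deferred entity operations on the driver."""

        def handle_pool_status_change(self, context, driver, pool,
                                      old_status, new_status):
            if old_status == 'DEFERRED' and new_status in ('CREATED', 'ACTIVE'):
                # The pool just became reachable through a root loadbalancer:
                # replay the deferred create against the driver.
                driver.create_pool(context, pool)
            elif old_status in ('CREATED', 'ACTIVE') and new_status == 'DEFERRED':
                # The pool was detached from its root loadbalancer: ask the
                # driver to remove it while the plugin keeps the DB record.
                driver.delete_pool(context, pool)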

Thanks,
Vijay V.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] operators vs users for choosing convergence engine

2015-02-03 Thread Clint Byrum
Excerpts from Angus Salkeld's message of 2015-02-03 02:40:44 -0800:
 On Tue, Feb 3, 2015 at 10:52 AM, Steve Baker sba...@redhat.com wrote:
 
  A spec has been raised to add a config option to allow operators to choose
  whether to use the new convergence engine for stack operations. For some
  context you should read the spec first [1]
 
  Rather than doing this, I would like to propose the following:
  * Users can (optionally) choose which engine to use by specifying an
  engine parameter on stack-create (choice of classic or convergence)
  * Operators can set a config option which determines which engine to use
  if the user makes no explicit choice
  * Heat developers will set the default config option from classic to
  convergence when convergence is deemed sufficiently mature
 
  I realize it is not ideal to expose this kind of internal implementation
  detail to the user, but choosing convergence _will_ result in different
  stack behaviour (such as multiple concurrent update operations) so there is
  an argument for giving the user the choice. Given enough supporting
  documentation they can choose whether convergence might be worth trying for
  a given stack (for example, a large stack which receives frequent updates)
 
  Operators likely won't feel they have enough knowledge to make the call
  that a heat install should be switched to using all convergence, and users
  will never be able to try it until the operators do (or the default
  switches).
 
  Finally, there are also some benefits to heat developers. Creating a whole
  new gate job to test convergence-enabled heat will consume its share of CI
  resource. I'm hoping to make it possible for some of our functional tests
  to run against a number of scenarios/environments. Being able to run tests
  under classic and convergence scenarios in one test run will be a great
  help (for performance profiling too).
 
 
 Hi
 
 I didn't have a good initial response to this, but it's growing on me. One
 issue is the specific option that we expose: it's not nice having
 a dead option once we totally switch over and remove classic. So is it
 worth coming up with a real feature that convergence-phase-1 enables
 and use that (like enable-concurrent-updates). Then we need to think if we
 would actually want to keep that feature around (as in
 once classic is gone is it possible to maintain
 disable-concurrent-update).
 

There are other features of convergence that will be less obvious.
Having stack operations survive a restart of the engines is a pretty big
one that one might have a hard time grasping, but will be appreciated by
users. Also being able to push a bigger stack in will be a large benefit,
though perhaps not one that is realized on day 1.

Anyway, I'd prefer that they just be versioned, and not named. The names
are too implementation specific. A v1 stack will be expected to work
with v1 stack tested templates and parameters for as long as we support
v1 stacks.

A v2 stack will be expected to work similarly, but may act differently,
and thus a user can treat this as another API update that they need to
deal with. The features will be a force multiplier, but the recommendation
of the team by removing the experimental tag will be the primary
motivator. And for operators, when they're comfortable with new stacks all
going to v2 they can enable that as the default. If they trust the Heat
developers, they can just go to v2 as default when the Heat devs say so.

Once we get all of the example templates to work with v2 and write some
new v2-specific stacks, that's the time to write a migration tool and
deprecate v1.

So, to be clear, I'm fully in support of Steve Baker's suggestion to let
the users choose which engine to use. However, I think we should treat
it not as an engine choice, but as an interface choice. The fact that
it takes a whole new engine to support the new features of the interface
is the implementation detail that no end-user needs to care about.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unable to sign ICLA (OpenStack Individual Contributor License Agreement)

2015-02-03 Thread John Villalovos
Thanks.  I will try these solutions.  I think it is because I have not yet
joined the foundation.

John

On Tue, Feb 3, 2015 at 9:40 AM, Abhishek L abhishek.lekshma...@gmail.com
wrote:

 Hi

 On Tue, Feb 3, 2015 at 10:47 PM, John Villalovos
 openstack@sodarock.com wrote:
 
  I have attempted over the last two days to sign the ICLA (
  OpenStack Individual Contributor License Agreement)
 
  Each time I do it I get the following error:
 
  Code Review - Error
  Server Error
  Cannot store contact information

 Have you already created a profile with the same email id on openstack?
 https://www.openstack.org/profile/
 IIRC I faced the same issue when I registered; however, the problems
 seemed to have gone away after creating a profile in openstack.

 
 
  Any ideas who I can contact to fix this?
 
  Thanks,
  John
 

 Regards
 Abhishek

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] SQLite support - drop or not?

2015-02-03 Thread Georgy Okrokvertskhov
I think we should switch to a clean migration path. We do have production
installations, but we can handle the initial db upgrade case by case for
customers. It is better to fix this issue now, when we have few customers,
rather than doing it later at a larger scale.

Thanks
Georgy

On Tue, Feb 3, 2015 at 9:05 AM, Mike Bayer mba...@redhat.com wrote:



 Andrew Pashkin apash...@mirantis.com wrote:

  Mike Bayer wrote:
  there’s always a naming convention in place; all databases other than
  SQLite produce them on the fly if you don’t specify one.  The purpose
  of the Alembic/SQLAlchemy naming_convention feature is so that you
  have *one* naming convention, rather than N unpredictable conventions.
  I’m not sure if you’re arguing the feature should not be used.  IMHO
  it should definitely be used for an application that is deploying
  cross-database.  Otherwise you have no choice but to hardcode the
  naming conventions of each target database individually in all cases
  that you need to refer to them.
  You can't just bring SA/Alembic naming conventions into the project,
  because they will collide with auto-generated constraint names.

 I was proposing a way to fix this for the murano project which only
 appears to have four migrations so far, but with the assumption that there
 are existing production environments which cannot do a full rebuild.

 
  So you need to hardcode reverse-engineered constraint names into the
  old migrations and then add a new migration that renames constraints
  according to the naming conventions.
  OR you need to drop the old
  migrations and create a new one with naming conventions - that will
  be backward incompatible, but cleaner.


 My proposal was to essentially do both strategies.   Build out fully clean
 migrations from the start, but also add an additional “conditional”
 migration that will repair a Postgresql / MySQL database that is already at
 the head, and is detected as having the older naming convention.  Because
 openstack does not appear to use offline migrations, this would be doable,
 though not necessarily worth it.

 If Murano can afford to just restart with clean migrations and has no
 production deployments yet which would be disrupted by a full rebuild, then
 sure, just do this.
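
To make the naming_convention feature under discussion concrete, here is a
minimal sketch (the convention dict and the table are illustrative, not
Murano's actual schema):

    from sqlalchemy import Column, Integer, MetaData, String, Table

    # One predictable convention for every backend, instead of N
    # database-specific auto-generated names.
    NAMING_CONVENTION = {
        "ix": "ix_%(column_0_label)s",
        "uq": "uq_%(table_name)s_%(column_0_name)s",
        "ck": "ck_%(table_name)s_%(constraint_name)s",
        "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
        "pk": "pk_%(table_name)s",
    }

    metadata = MetaData(naming_convention=NAMING_CONVENTION)

    # Constraints on this table get deterministic names such as
    # "pk_environment" and "uq_environment_name" on every backend.
    environment = Table(
        "environment", metadata,
        Column("id", Integer, primary_key=True),
        Column("name", String(255), unique=True),
    )

With that MetaData used as Alembic's target_metadata, migrations can refer to
constraints by these predictable names instead of reverse-engineering each
backend's defaults.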





 
  On 03.02.2015 18:32, Mike Bayer wrote:
  Andrew Pashkin apash...@mirantis.com wrote:
 
  Mike Bayer wrote:
  The patch seems to hardcode the conventions for MySQL and Postgresql.
  The first thought I had was that in order to remove the dependence
  on them here, you’d need to instead simply turn off the
  “naming_convention” in the MetaData if you detect that you’re on one
  of those two databases. That would be a safer idea than trying to
  hardcode these conventions (and would also work for other kinds
  of backends).
   With your solution it will still be necessary for developers
   to guess constraint names when writing new migrations. And it will
   be even harder, because they will also need to handle the case of
   naming conventions.
 
  there’s always a naming convention in place; all databases other than
 SQLite produce them on the fly if you don’t specify one.  The purpose of
 the Alembic/SQLAlchemy naming_convention feature is so that you have *one*
 naming convention, rather than N unpredictable conventions.   I’m not sure
 if you’re arguing the feature should not be used.  IMHO it should
 definitely be used for an application that is deploying cross-database.
 Otherwise you have no choice but to hardcode the naming conventions of each
 target database individually in all cases that you need to refer to them.
 
 
 
 
  Mike Bayer wrote:
  However, it’s probably worthwhile to introduce a migration that does
  in fact rename existing constraints on MySQL and Postgresql.
  Yes, that's what I want to do in case of the first solution.
 
  Mike Bayer wrote:
  Another possible solution is to drop all current migrations and
  introduce new one with correct names.
  you definitely shouldn’t need to do that.
  Why?
 
  On 30.01.2015 22:00, Mike Bayer wrote:
  Andrew Pashkin apash...@mirantis.com wrote:
 
  Working on this issue I encountered another problem.
 
   Most indices in the project have no names, and because of that, a
   developer must reverse-engineer them in every migration.
   Read about that also here [1].

   SQLAlchemy and Alembic provide a feature for generating constraint
   names by pattern, specifically to resolve that kind of issue [1].
 
  I decided to introduce usage of this feature in Murano.
 
   I've implemented a solution that preserves backward compatibility
   for migrations and allows all constraints to be renamed according
   to the patterns safely [2]. With it, users that have already deployed
  Murano
   will be able to upgrade to a new version of Murano without issues.
 
   There are downsides to this solution:
   - It assumes that all versions of Postgres and MySQL use the
   same patterns for constraint name generation.
  - It is hard to implement a test for this solution and 

[openstack-dev] Unable to sign ICLA (OpenStack Individual Contributor License Agreement)

2015-02-03 Thread John Villalovos
I have attempted over the last two days to sign the ICLA (
OpenStack Individual Contributor License Agreement)

Each time I do it I get the following error:

Code Review - Error
Server Error
Cannot store contact information


Any ideas who I can contact to fix this?

Thanks,
John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unable to sign ICLA (OpenStack Individual Contributor License Agreement)

2015-02-03 Thread Jeremy Stanley
On 2015-02-03 09:17:53 -0800 (-0800), John Villalovos wrote:
 I have attempted over the last two days to sign the ICLA (
 OpenStack Individual Contributor License Agreement)
 
 Each time I do it I get the following error:
 
 Code Review - Error
 Server Error
 Cannot store contact information
 
 Any ideas who I can contact to fix this?

See:

http://ask.openstack.org/question/35200/
http://ask.openstack.org/question/56720/

Hopefully one of those matches your situation. If not, I can
investigate further.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Fuel-Library] MVP implementation of Granular Deployment merged into Fuel master branch

2015-02-03 Thread Andrew Woodward
Either we do specs, or we don't. Either everyone has to land their specs
before code or no one does. It's that simple.

What should be sorted out? It is unavoidable that people will comment and
 ask questions during the development cycle.
 I am not sure that merging the spec as early as possible, and then adding
 comments and different fixes, is a good strategy.
 On the other hand we need to eliminate risks... but how can merging the spec
 help?


The spec defining what has been committed already needs to be merged, and
we can open another review to modify the spec in another direction if
necessary.

We can spend several months on polishing the spec; will it help
 to release the feature in time? I don't think so.


The spec doesn't have to be perfect, but it needs to be merged prior to
the code it describes.

I think the spec should be a synchronization point, where different
 teams can discuss details and make sure that everything is correct.
 The spec should represent the current state of the code which is
 merged and which is going to be merged.


This isn't the intent of the spec; it's to document the extent, general
direction, and impact of a feature. As a side effect, well-defined specs
can also serve as documentation for the feature. While discussion on the
spec is common, this should be done on a merged spec.

On Thu, Jan 29, 2015 at 2:45 AM, Evgeniy L e...@mirantis.com wrote:

 Hi,

 +1 to Dmitriy's comment.
  We can spend several months on polishing the spec; will it help
  to release the feature in time? I don't think so.
  Also, with your suggestion we'll get a lot of patches over 2 thousand
  lines of code after the spec is merged. Huge patches reduce quality,
  because they're too hard to review, and such patches are much harder
  to get merged.
 I think the spec should be a synchronization point, where different
 teams can discuss details and make sure that everything is correct.
 The spec should represent the current state of the code which is
 merged and which is going to be merged.

 Thanks,

 On Thu, Jan 29, 2015 at 1:03 AM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:

 Andrew,
  What should be sorted out? It is unavoidable that people will comment and
  ask questions during the development cycle.
  I am not sure that merging the spec as early as possible, and then adding
  comments and different fixes, is a good strategy.
  On the other hand we need to eliminate risks... but how can merging the spec
  help?

 On Wed, Jan 28, 2015 at 8:49 PM, Andrew Woodward xar...@gmail.com
 wrote:

 Vova,

  It's great to see so much progress on this; however, it appears that we
  have started merging code prior to the spec landing [0]. Let's get it
  sorted ASAP.

 [0] https://review.openstack.org/#/c/113491/

 On Mon, Jan 19, 2015 at 8:21 AM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:
  Hi, Fuelers and Stackers
 
  I am glad to announce that we merged initial support for the granular
  deployment feature which is described here:
 
 
 https://blueprints.launchpad.net/fuel/+spec/granular-deployment-based-on-tasks
 
  This is an important milestone for our overall deployment and operations
  architecture, and it is going to significantly improve our testing and
  engineering process.
 
  Starting from now we can start merging code for:
 
 
 https://blueprints.launchpad.net/fuel/+spec/fuel-library-modularization
 
 https://blueprints.launchpad.net/fuel/+spec/fuel-library-modular-testing
 
  We are still working on documentation and QA stuff, but it should be
 pretty
  simple for you to start trying it out. We would really appreciate your
  feedback.
 
  Existing issues are the following:
 
  1) pre and post deployment hooks are still out of the scope of main
  deployment graph
  2) there is currently only puppet task provider working reliably
  3) no developer published documentation
  4) acyclic graph testing not injected into CI
  5) there is currently no opportunity to execute particular task - only
 the
  whole deployment (code is being reviewed right now)
 
  --
  Yours Faithfully,
  Vladimir Kuklin,
  Fuel Library Tech Lead,
  Mirantis, Inc.
  +7 (495) 640-49-04
  +7 (926) 702-39-68
  Skype kuklinvv
  45bk3, Vorontsovskaya Str.
  Moscow, Russia,
  www.mirantis.com
  www.mirantis.ru
  vkuk...@mirantis.com
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Andrew
 Mirantis
 Ceph community


Re: [openstack-dev] Unable to sign ICLA (OpenStack Individual Contributor License Agreement)

2015-02-03 Thread Abhishek L
Hi

On Tue, Feb 3, 2015 at 10:47 PM, John Villalovos
openstack@sodarock.com wrote:

 I have attempted over the last two days to sign the ICLA (
 OpenStack Individual Contributor License Agreement)

 Each time I do it I get the following error:

 Code Review - Error
 Server Error
 Cannot store contact information

Have you already created a profile with the same email id on openstack?
https://www.openstack.org/profile/
IIRC I faced the same issue when I registered; however, the problems
seemed to have gone away after creating a profile in openstack.



 Any ideas who I can contact to fix this?

 Thanks,
 John


Regards
Abhishek

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] operators vs users for choosing convergence engine

2015-02-03 Thread Zane Bitter

On 02/02/15 19:52, Steve Baker wrote:

A spec has been raised to add a config option to allow operators to
choose whether to use the new convergence engine for stack operations.
For some context you should read the spec first [1]

Rather than doing this, I would like to propose the following:


I am strongly, strongly opposed to making this part of the API.


* Users can (optionally) choose which engine to use by specifying an
engine parameter on stack-create (choice of classic or convergence)
* Operators can set a config option which determines which engine to use
if the user makes no explicit choice
* Heat developers will set the default config option from classic to
convergence when convergence is deemed sufficiently mature


We'd also need a way for operators to prevent users from enabling 
convergence if they're not ready to support it.



I realize it is not ideal to expose this kind of internal implementation
detail to the user, but choosing convergence _will_ result in different
stack behaviour (such as multiple concurrent update operations) so there
is an argument for giving the user the choice. Given enough supporting
documentation they can choose whether convergence might be worth trying
for a given stack (for example, a large stack which receives frequent
updates)


It's supposed to be a strict improvement; we don't need to ask 
permission. We have made major changes of this type in practically every 
Heat release. When we switched from creating resources serially to 
creating them in parallel in Havana we didn't ask permission. We just 
did it. When we started allowing users to recover from a failed 
operation in Juno we didn't ask permission. We just did it. We don't 
need to ask permission to allow concurrent updates. We can just do it.


The only difference here is that we are being a bit smarter and 
uncoupling our development schedule from the release cycle. There are 15 
other blueprints, essentially all of which have to be complete before 
convergence is usable at all. It won't do *anything at all* until we are 
at least 12 blueprints in. The config option buys us time to land them 
without the risk of something half-finished appearing in the release 
(trunk-chasers will also thank us). It has no other legitimate purpose IMO.


The goal is IN NO WAY to maintain separate code paths in the long term. 
The config option is simply a development strategy to allow us to land 
code without screwing up a release and while maintaining as much test 
coverage as possible.
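
As a concrete illustration of such a development toggle (the option name below
is made up for this sketch; the spec may pick something different), it could be
as small as:

    from oslo_config import cfg

    # Hypothetical operator-facing switch, flipped to True by default once
    # the remaining convergence blueprints have landed.
    convergence_opts = [
        cfg.BoolOpt('convergence_engine',
                    default=False,
                    help='Use the convergence engine instead of the classic '
                         'engine for new stack operations.'),
    ]

    cfg.CONF.register_opts(convergence_opts)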



Operators likely won't feel they have enough knowledge to make the call
that a heat install should be switched to using all convergence, and
users will never be able to try it until the operators do (or the
default switches).


Hardly anyone should have to make a call. We should flip the default as 
soon as all of the blueprints have landed (i.e. as soon as it works at 
all), provided that a release is not imminent. (Realistically, at this 
point I think we have to say the target is to do it as early in 
Lizard as we can.) That means for those chasing trunk they get it as 
soon as it works at all, and for those using stable releases they get it 
at the next release, just like every other feature we have ever added.


As a bonus, trunk-chasing operators who need to can temporarily delay 
enabling of convergence until a point of their choosing in the release 
cycle by overriding the default. Anybody in that position likely has 
enough knowledge to make the right call for them.


So I believe that all of our stakeholders are catered to by the config 
option: operators  users who want a stable, tested release; 
operator/users who want to experiment on the bleeding edge; and 
operators who chase trunk but whose users require stability.


The only group that benefits from enshrining the choice in the API - 
users who want to experiment with the bleeding edge, but who don't 
control their own OpenStack deployment - doesn't actually exist, and if 
it did then this would be among the least of their problems.



Finally, there are also some benefits to heat developers. Creating a
whole new gate job to test convergence-enabled heat will consume its
share of CI resource. I'm hoping to make it possible for some of our
functional tests to run against a number of scenarios/environments.
Being able to run tests under classic and convergence scenarios in one
test run will be a great help (for performance profiling too).


I think this is the strongest argument in favour. However, I'd like to 
think it would be possible to run the functional tests twice in the 
gate, changing the config and restarting the engine in between.


But if the worst comes to the worst, then although I think it's 
preferable to use one VM for twice as long vs. two VMs for the same 
length of time, I don't think the impact on resource utilisation in the 
gate of choosing one over the other is likely to be huge. And I don't 
see this situation persisting for a long 

Re: [openstack-dev] The API WG mission statement

2015-02-03 Thread Everett Toews
On Feb 3, 2015, at 10:07 AM, michael mccune m...@redhat.com wrote:

 On 02/02/2015 08:58 AM, Chris Dent wrote:
 This is pretty good but I think it leaves unresolved the biggest
 question I've had about this process: What's so great about
 converging the APIs? If we can narrow or clarify that aspect, good
 to go.
 
 +1, really good point
 
 The implication with your statement above is that there is some kind
 of ideal which maps, at least to some extent, across the rather
 diverse set of resources, interactions and transactions that are
 present in the OpenStack ecosystem. It may not be your intent but
 the above sounds like we want all the APIs to be kinda similar in
 feel or when someone is using an OpenStack-related API they'll be
 able to share some knowledge between then with regard to how stuff
 works.
 
 I'm not sure how realistic^Wuseful that is when we're in an
 environment with APIs with such drastically different interactions
 as (to just select three) Swift, Nova and Ceilometer.
 
 even though there are drastically different interactions among the services 
 of openstack, i think there is some value to those apis having a similar feel 
 to them. i always find it to be useful when i can generally infer some of the 
details about an api by its general structure/design. imo, the guidelines 
 will help to bake in some of these inferences.

After you’ve built a few clients against many of the OpenStack APIs, the 
inconsistencies and, often times, bizarre design decisions truly begin to 
grate on you and wear you down. One example is not being able to reuse parsing 
code in a client because of these inconsistencies and bad design. It leaves you 
with a client that has more code than necessary and is brittle. Developer 
experience can be difficult to quantify and often times it does come down to 
words like the “feel” of an API.

Regardless of how difficult it is to quantify, we, as developers ourselves, know 
the joy of using a consistent and well-designed library or set of libraries. 
The same goes for APIs. It’s analogous to UX for UIs. We’re doing DX for APIs. 
If we can make the APIs a joy to use, more users will build more tools on 
OpenStack, enabling even more users to build more applications. 

This is a useful and worthwhile endeavor. 

 unfortunately, baking a feel into an api guideline is more of an analog 
 task. so, very difficult to codify... but i can dream =)
 
 
 We've seen this rather clearly in the recent debates about handling
 metadata.
 
 Now, there's nothing in what you say above that actually straight
 out disagrees with my response, but I think there's got to be some
 way we can remove the ambiguity or narrow the focus. The need to
 remove ambiguity is why the discussion of having a mission statement
 came up.
 
 +1
 
 
 I think where we want to focus our attention is:
 
 * strict adherence to correct HTTP
 * proper use of response status codes
 * effective (and correct) use of media types
 * some guidance on how to deal with change/versioning
 * and _maybe_ a standard for providing actionable error responses
 * setting not standards but guidelines for anything else
 
 really solid starting point, the last point deserves emphasis too. i think we 
 should be very mindful of the idea that these are guidelines not hard 
 standards, but i haven't heard anyone in the meetings referring to them as 
 standards. it seemed like we had consensus about the guidelines part.

It’s early days in the API WG. Coming up with a list like this at the outset 
seems overly restrictive. How does something get on the list? How does 
something get off the list? Whatever the answer, I can see it taking a lot of 
wheel spinning. I prefer to keep things a bit more open early on and let it 
evolve.

I’ll echo Mike’s sentiment that we should be very mindful of the idea that 
these are guidelines not hard standards. Hmm… even that might be a bit 
restrictive. In the Openstack HTTP error codes [1] discussion I’m getting the 
impression that there is a desire to make this a standard. Perhaps we need to 
leave the door open to setting standards in certain cases?

Everett

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-January/055549.html


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The API WG mission statement

2015-02-03 Thread michael mccune

On 02/03/2015 01:49 PM, Everett Toews wrote:

I think where we want to focus our attention is:

* strict adherence to correct HTTP
* proper use of response status codes
* effective (and correct) use of media types
* some guidance on how to deal with change/versioning
* and _maybe_ a standard for providing actionable error responses
* setting not standards but guidelines for anything else


really solid starting point, the last point deserves emphasis too. i think we should be 
very mindful of the idea that these are guidelines not hard standards, but i haven't 
heard anyone in the meetings referring to them as standards. it seemed like we had 
consensus about the guidelines part.


It’s early days in the API WG. Coming up with a list like this at the outset 
seems overly restrictive. How does something get on the list? How does 
something get off the list? Whatever the answer, I can see it taking a lot of 
wheel spinning. I prefer to keep things a bit more open early on and let it 
evolve.


that's something i hadn't thought about, the process behind a list of 
this sort. i don't mind having this list as a starting point, but i also 
agree with Everett on the need to establish an open and transparent 
working group. i'm also a big fan of the evolutionary growth model for 
this effort.




I’ll echo Mike’s sentiment that we should be very mindful of the idea that 
these are guidelines not hard standards. Hmm… even that might be a bit 
restrictive. In the Openstack HTTP error codes [1] discussion I’m getting the 
impression that there is a desire to make this a standard. Perhaps we need to 
leave the door open to setting standards in certain cases?


i guess this is a point we should address as well, the possibility for a 
long term path towards standards. it's a tough chicken and egg type 
situation, especially given the desire for openness and free growth. i'm 
not sure how we would best flag that standards may someday evolve out of 
the wg, or even if we need to.


mike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] operators vs users for choosing convergence engine

2015-02-03 Thread Clint Byrum
Excerpts from Zane Bitter's message of 2015-02-03 10:00:44 -0800:
 On 02/02/15 19:52, Steve Baker wrote:
  A spec has been raised to add a config option to allow operators to
  choose whether to use the new convergence engine for stack operations.
  For some context you should read the spec first [1]
 
  Rather than doing this, I would like to propose the following:
 
 I am strongly, strongly opposed to making this part of the API.
 
  * Users can (optionally) choose which engine to use by specifying an
  engine parameter on stack-create (choice of classic or convergence)
  * Operators can set a config option which determines which engine to use
  if the user makes no explicit choice
  * Heat developers will set the default config option from classic to
  convergence when convergence is deemed sufficiently mature
 
 We'd also need a way for operators to prevent users from enabling 
 convergence if they're not ready to support it.
 

This would be relatively simple to do by simply providing a list of the
supported stack versions.

  I realize it is not ideal to expose this kind of internal implementation
  detail to the user, but choosing convergence _will_ result in different
  stack behaviour (such as multiple concurrent update operations) so there
  is an argument for giving the user the choice. Given enough supporting
  documentation they can choose whether convergence might be worth trying
  for a given stack (for example, a large stack which receives frequent
  updates)
 
 It's supposed to be a strict improvement; we don't need to ask 
 permission. We have made major changes of this type in practically every 
 Heat release. When we switched from creating resources serially to 
 creating them in parallel in Havana we didn't ask permission. We just 
 did it. We when started allowing users to recover from a failed 
 operation in Juno we didn't ask permission. We just did it. We don't 
 need to ask permission to allow concurrent updates. We can just do it.
 

The visible change in making things parallel was minimal. In talking
about convergence, it's become clear that users can and should expect
something radically different when they issue stack updates. I'd love to
say that it can be done to just bind convergence into the old ways, but
doing so would also remove the benefit of having it.

Also allowing resume wasn't a new behavior, it was fixing a bug really
(that state was lost on failed operations). Convergence is a pretty
different beast from the current model, and letting users fall back
to the old one means that when things break they can solve their own
problem while the operator and devs figure it out. The operator may know
what is breaking their side, but they may have very little idea of what
is happening on the end-user's side.

 The only difference here is that we are being a bit smarter and 
 uncoupling our development schedule from the release cycle. There are 15 
 other blueprints, essentially all of which have to be complete before 
 convergence is usable at all. It won't do *anything at all* until we are 
 at least 12 blueprints in. The config option buys us time to land them 
 without the risk of something half-finished appearing in the release 
 (trunk-chasers will also thank us). It has no other legitimate purpose IMO.
 

The config option only really allows an operator to go forward. If
the users start expecting concurrent updates and resiliency, and then
all their stacks are rolled back to the old engine because #reasons,
this puts pressure on the operator. This will make operators delay the
forward progress onto convergence for as long as possible.

I'm also not entirely sure rolling the config option back to the old
setting would even be possible without breaking any in-progress stacks.

 The goal is IN NO WAY to maintain separate code paths in the long term. 
 The config option is simply a development strategy to allow us to land 
 code without screwing up a release and while maintaining as much test 
 coverage as possible.
 

Nobody plans to maintain the Keystone v2 domainless implementation forever,
either. But letting users consider domains and other v3 options for a while
means that the ecosystem grows more naturally without giving up ground
to instability. Once the v3 adoption rate is high enough, people will
likely look at removing the old code because nobody uses it. In my
opinion OpenStack has been far too eager to deprecate and remove things
that users rely on, but I do think this will happen and should happen
eventually.

  Operators likely won't feel they have enough knowledge to make the call
  that a heat install should be switched to using all convergence, and
  users will never be able to try it until the operators do (or the
  default switches).
 
 Hardly anyone should have to make a call. We should flip the default as 
 soon as all of the blueprints have landed (i.e. as soon as it works at 
 all), provided that a release is not imminent. (Realistically, at this 
 

Re: [openstack-dev] The API WG mission statement

2015-02-03 Thread Kevin L. Mitchell
On Tue, 2015-02-03 at 18:49 +, Everett Toews wrote:
 I’ll echo Mike’s sentiment that we should be very mindful of the idea
  that these are guidelines not hard standards. Hmm… even that might be
 a bit restrictive. In the Openstack HTTP error codes [1] discussion
 I’m getting the impression that there is a desire to make this a
 standard. Perhaps we need to leave the door open to setting standards
 in certain cases?

I tend to be in the guideline for now camp, but I see us slowly
shifting over to establishing standards where it makes sense.  Once the
error codes discussion truly starts to converge toward consensus
(something I feel it's close to, but not quite there yet), it seems
reasonable to make it a guideline.  As far as standards go—if we go with
the idea of header addition to tell the client about the codes, it
becomes something that can easily be added to all OpenStack APIs, and
once that happens, then I can foresee it becoming a formal standard
recommended by the API WG…
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com
Rackspace


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] noVNC disabled by default?

2015-02-03 Thread Marty Falatic (mfalatic)
I've opened a bug to track this effort for DevStack (and to help point
others in the right direction when they stumble upon this):
https://bugs.launchpad.net/devstack/+bug/1417735

 - Marty


On 1/12/15, 10:02 AM, Ben Nemec openst...@nemebean.com wrote:

On 01/09/2015 05:24 PM, Sean Dague wrote:
 On 01/09/2015 06:12 PM, Solly Ross wrote:
 Hi,
 
 I just noticed that noVNC was disabled by default in devstack (the
relevant 
 change was 
 
 https://review.openstack.org/#/c/140860/).
 

 
 Now, if I understand correctly (based on the short commit message),
the 
 rationale is that we don't want devstack to rely on non-OpenStack Git
 
 repos, so that devstack doesn't fail when some external Git hosting
 
 service (e.g. GitHub) goes down.
 
 Realistically the policy is more about the fact that we should be using
 released (and commonly available) versions of dependent software.
 Ideally from packages, but definitely not from git trees. We don't want
 to be testing everyone else's bleeding edge, there are lots of edges and
 pointy parts in OpenStack as it is.
 

 
 This is all fine and dandy (and a decent idea, IMO), but this leaves
devstack   
 installing a broken installation of Horizon by default -- Horizon
still  
 attempts to show the noVNC console when you go to the console tab
for an  
 instance, which is a bit confusing, initially.  Now, it wasn't
particularly
 hard to track down *why* this happened
(hmm...   
 my stackrc seems to be missing n-novnc in ENABLED_SERVICES.
Go-go-gadget
 `git blame`), but it strikes me as a bit inconsistent and
inconvenient.   

 
 Personally, I would like to see noVNC back as a default service, since
it   
 can be useful when trying to see what your VM is actually doing during
 
 boot, or if you're having network issues.  Is there anything I can do
 
 as a noVNC maintainer to help?
 

 
 We (the noVNC team) do publish releases, and I've been trying to make
 
 sure that they happen in a more timely fashion.  In the past, it was
necessary  
 to use Git master to ensure that you got the latest version (there was
a
 2-year gap between 0.4 and 0.5!), but I'm trying to change that.
Currently,
 it would appear that most of the distros are still using the old
version (0.4), 
 but versions 0.5 and 0.5.1 are up on GitHub as release tarballs (0.5
being a 3  
 months old and 0.5.1 having been tagged a couple weeks ago).  I will
attempt to 
 work with distro maintainers to get the packages updated.  However, in
the mean 
 time, is there a place would be acceptable to place the releases so
that devstack
 can install them?
 
 If you rewrite the noVNC installation in devstack to work from a release
 URL that includes the released version on it, I think that would be
 sufficient to turn it back on. Again, ideally this should be in distros,

FWIW, I looked into installing novnc from distro packages quite a while
ago and ran into problems because the dependencies were wonky.  Like,
novnc would pull in Nova which then overwrote a bunch of the devstack
Nova stuff.  I don't know if that's still an issue, but that's the main
reason I never pushed ahead with removing the git install of novnc (that
was during the release drought, so those weren't an option at the time
either).

 but I think we could work on doing release installs until then,
 especially if the install process is crisp.
 
 I am looking at the upstream release tarball right now though, and don't
 see any INSTALL instructions in it. So let's see what the devstack patch
 would look like to do the install.
 
  -Sean
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] fixed ip info shown for port even when dhcp is disabled

2015-02-03 Thread John Belamaric
Sure, makes sense. The placeholder I was referring to would be communicated to 
the IPAM plugin. Though, if there is no IP, it may just be best not to involve 
the IPAM subsystem.
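
Purely to illustrate the placeholder idea (none of these classes exist in
Neutron or the pluggable IPAM spec; the names are invented here):

    class AddressRequest(object):
        """Base class for asking an IPAM driver for an address."""


    class SpecificAddressRequest(AddressRequest):
        """The caller already knows which address it wants."""
        def __init__(self, address):
            self.address = address


    class DeferredAddressRequest(AddressRequest):
        """Placeholder: the port exists, but its IP will only be known later
        (e.g. learned by snooping) and bound via a subsequent update_port."""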

From: Kevin Benton blak...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Tuesday, February 3, 2015 at 4:38 PM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Cc: Padmanabhan Krishnan kpr...@yahoo.com
Subject: Re: [openstack-dev] [Neutron] fixed ip info shown for port even when 
dhcp is disabled

If we have ports without IPs, I don't think we need a placeholder, do we? 
Wouldn't a port without an IP address be the same thing as a port with a 
placeholder indicating that it doesn't have an IP address?

On Tue, Feb 3, 2015 at 8:57 AM, John Belamaric 
jbelama...@infoblox.com wrote:
Hi Paddu,

I think this is less an issue of the pluggable IPAM than it is the Neutron 
management layer, which requires an IP for a port, as far as I know. If the 
management layer is updated to allow a port to exist without a known IP, then 
an additional IP request type could be added to represent the placeholder you 
describing.

However, doing so leaves IPAM out of the hands of Neutron and out of the hands 
of the external (presumably authoritative) IPAM system. This could lead to 
duplicate IP issues since each VM is deciding its own IP without any 
centralized coordination. I wouldn't recommend this approach to managing your 
IP space.

John

From: Padmanabhan Krishnan kpr...@yahoo.com
Reply-To: Padmanabhan Krishnan kpr...@yahoo.com
Date: Wednesday, January 28, 2015 at 4:58 PM
To: John Belamaric jbelama...@infoblox.com,
OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] fixed ip info shown for port even when 
dhcp is disabled

Some follow up questions on this.

In the specs, I see that during a create_port there are provisions to query
the external source via the pluggable IPAM to return the IP.
This works fine for cases where the external source (say, a DHCP server) can be
queried for the IP address when a launch happens.

Is there a provision to have the flexibility of a late IP assignment?

I am thinking of cases like the temporary unavailability of the external IP
source, or a lack of standard interfaces, in which case data packet snooping is
used to find the IP address of a VM after launch. Something similar to late
binding of IP addresses.
This means the create_port  may not get the IP address from the pluggable IPAM. 
In that case, launch of a VM (or create_port) shouldn't fail. The Pluggable 
IPAM should have some provision to return something equivalent to unavailable 
during create_port and be able to do an update_port when the IP address becomes 
available.

I don't see that option. Please correct me if I am wrong.

Thanks,
Paddu


On Thursday, December 18, 2014 7:59 AM, Padmanabhan Krishnan 
kpr...@yahoo.com wrote:


Hi John,
Thanks for the pointers. I shall take a look and get back.

Regards,
Paddu


On Thursday, December 18, 2014 6:23 AM, John Belamaric 
jbelama...@infoblox.com wrote:


Hi Paddu,

Take a look at what we are working on in Kilo [1] for external IPAM. While this 
does not address DHCP specifically, it does allow you to use an external source 
to allocate the IP that OpenStack uses, which may solve your problem.

Another solution to your question is to invert the logic - you need to take the 
IP allocated by OpenStack and program the DHCP server to provide a fixed IP for 
that MAC.

You may be interested in looking at this Etherpad [2] that Don Kehn put 
together gathering all the various DHCP blueprints and related info, and also 
at this BP [3] for including a DHCP relay so we can utilize external DHCP more 
easily.

[1] https://blueprints.launchpad.net/neutron/+spec/neutron-ipam
[2] https://etherpad.openstack.org/p/neutron-dhcp-org
[3] https://blueprints.launchpad.net/neutron/+spec/dhcp-relay

John

From: Padmanabhan Krishnan kpr...@yahoo.com
Reply-To: Padmanabhan Krishnan kpr...@yahoo.com,
OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Wednesday, December 17, 2014 at 6:06 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] fixed ip info shown for port even when 
dhcp is disabled

This means whatever tools the 

[openstack-dev] [congress] sprinting towards Kilo2

2015-02-03 Thread sean roberts
Over the last couple of meetings, we have discussed holding a hackathon
this Thursday and Friday. You each have some code you are working on. Let’s
each pick a 3-4 hour block of time to intensively collaborate. We can use
the #congress IRC channel and google hangout.

Reply to this thread, so we can allocate people’s time.

~ sean
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The API WG mission statement

2015-02-03 Thread Chris Dent

On Tue, 3 Feb 2015, Everett Toews wrote:

On Feb 3, 2015, at 10:07 AM, michael mccune m...@redhat.com wrote:

On 02/02/2015 08:58 AM, Chris Dent wrote:

I think where we want to focus our attention is:

* strict adherence to correct HTTP
* proper use of response status codes
* effective (and correct) use of media types
* some guidance on how to deal with change/versioning
* and _maybe_ a standard for providing actionable error responses
* setting not standards but guidelines for anything else


really solid starting point, the last point deserves emphasis too. i
think we should be very mindful of the idea that these are guidelines
not hard standards, but i haven't heard anyone in the meetings
referring to them as standards. it seemed like we had consensus about
the guidelines part.


It’s early days in the API WG. Coming up with a list like this at
the outset seems overly restrictive. How does something get on the
list? How does something get off the list? Whatever the answer, I can
see it taking a lot of wheel spinning. I prefer to keep things a bit
more open early on and let it evolve.


Interesting. I made that list above because I imagined it to be (and
wanted it to be) less restrictive (that is, more abstract and general)
than many of the proposed guidelines which have come across the api-wg
gerrit radar. We talk quite a bit about the content of request and
response bodies. This surprises me very much in the context of HTTP APIs
where resource representation can be so diverse [1].

Items 2-5 are essentially item 1 restated, modulo some just do what
the rest of the world does.

Item 6 is a way of saying we need to be sure that the _reader_
knows these are guidelines not standards. Within the group we've
certainly agreed they are guidelines but a lot of other people react
otherwise, so it's just something to be clear about.

[1] And we haven't even begun to talk about content negotiation,
which is a shame.
--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The API WG mission statement

2015-02-03 Thread Dean Troyer
On Tue, Feb 3, 2015 at 1:19 PM, michael mccune m...@redhat.com wrote:

 that's something i hadn't thought about, the process behind a list of this
 sort. i don't mind having this list as a starting point, but i also agree
  with Everett on the need to establish an open and transparent working
 group. i'm also a big fan of the evolutionary growth model for this effort.


+++


 i guess this is a point we should address as well, the possibility for a
 long term path towards standards. it's a tough chicken and egg type
 situation, especially given the desire for openness and free growth. i'm
 not sure how we would best flag that standards may someday evolve out of
 the wg, or even if we need to.


I could see some guidelines becoming standards; it would be an easier
process than starting from scratch since a lot of the arguments have
already been had.  But it is a different set of people setting those
standards and I'm sure an additional round of editing would occur.  But
having a well-reasoned (I hope) starting point with rationale behind it is
huge.

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Product] [all][log] Openstack HTTP error codes

2015-02-03 Thread Kuvaja, Erno
 -Original Message-
 From: Sean Dague [mailto:s...@dague.net]
 Sent: 02 February 2015 16:19
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Product] [all][log] Openstack HTTP error codes
 
 On 02/01/2015 06:20 PM, Morgan Fainberg wrote:
  Putting on my sorry-but-it-is-my-job-to-get-in-your-way hat (aka
 security), let's be careful how generous we are with the user and data we
 hand back. It should give enough information to be useful but no more. I
 don't want to see us opened to weird attack vectors because we're exposing
 internal state too generously.
 
  In short let's aim for a slow roll of extra info in, and evaluate each data 
  point
 we expose (about a failure) before we do so. Knowing more about a failure is
 important for our users. Allowing easy access to information that could be
 used to attack / increase impact of a DOS could be bad.
 
  I think we can do it but it is important to not swing the pendulum too far
 the other direction too fast (give too much info all of a sudden).
 
 Security by cloud obscurity?
 
 I agree we should evaluate information sharing with security in mind.
 However, the black boxing level we have today is bad for OpenStack. At a
 certain point once you've added so many belts and suspenders, you can no
 longer walk normally any more.

++
 
  Anyway, let's stop having this discussion in the abstract and actually just 
 evaluate
 the cases in question that come up.

++

- Erno
 
   -Sean
 
 --
 Sean Dague
 http://dague.net
 
 __
 
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: OpenStack-dev-
 requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] django-openstack-auth and stable/icehouse

2015-02-03 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 01/29/2015 08:18 PM, Ryan Hsu wrote:
 Hi All,
 
 There was a change [1] 2 days ago in django-openstack-auth that
 introduces a new requirement oslo.config=1.6.0 to the project,
 which is now present in the 1.1.9 release of django-openstack-auth.
 While this change is in sync with master requirements,
 oslo.config=1.6.0, it does not jive with stable/icehouse
 requirements which is =1.2.0,1.5. Because stable/icehouse horizon
 does not have an upper-bound version requirement for
 django-openstack-auth, it currently takes this 1.1.9 release of
 django-openstack-auth with the conflicting oslo.config requirement.
 I have a bug open for this situation here [2].
 
 My first thought was to create a patch [3] to cap the
 django-openstack-auth version in stable/icehouse requirements,
 however, a reviewer pointed out that django-openstack-auth 1.1.8
 has a security fix that would be desired. My other thought was to
 decrease the minimum required version in django-openstack-auth to
 equal that of stable/icehouse requirements but this would then
 conflict with master requirements. Does anyone have thoughts on how
 to best resolve this?

I personally don't believe we should be responsible for fetching all
security fixes in external libraries that don't maintain stable
branches and hence just break their consumers. In ideal world,
django-openstack-auth would have a stable branch where the security
fix would be backported.

But since the library does not follow best practices, I think we
should just cap it at whatever version is compatible with other
requirements, and allow deployers to locally patch their
django-openstack-auth with security fixes.
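
For illustration, the cap would be a one-line change in the stable/icehouse
requirements list (exact bounds illustrative, not a tested proposal):

    # stable/icehouse global-requirements (bounds illustrative)
    django_openstack_auth>=1.1.3,<1.1.9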

Bumping minimal oslo.config version due to the issue in
django-openstack-auth seems like a wrong way to do it.

/Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] ironic-discoverd 1.0.0 released

2015-02-03 Thread Dmitry Tantsur

Hi!

A bit earlier than expected due to personal circumstances, I'm announcing 
the first stable release of ironic-discoverd: 1.0.0 [1]. It contains 
implementations of 6 blueprints and fixes for 17 bugs. Full release 
notes can be found at [2]; here is the summary:

* Redesigned API, including endpoint to get introspection status
* Better error handling, including proper time out
* Support for plugins hooking into data processing chain
* Support for the Kilo state machine

In addition to PyPI, a new RPM is built in Fedora rawhide and is expected 
to be provided via Juno RDO in the near future.


Please report bugs on launchpad [3].
The next feature release is planned before Kilo RC; feel free to submit 
ideas: [4].


Cheers,
Dmitry

[1] https://pypi.python.org/pypi/ironic-discoverd/1.0.0
[2] https://github.com/stackforge/ironic-discoverd#10-series
[3] https://bugs.launchpad.net/ironic-discoverd
[4] https://launchpad.net/ironic-discoverd/+milestone/1.1.0

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gantt] Scheduler sub-group meeting agenda 2/3

2015-02-03 Thread Sylvain Bauza


On 03/02/2015 06:08, Dugger, Donald D wrote:


Meeting on #openstack-meeting at 1500 UTC (8:00AM MST)

1) Remove direct nova DB/API access by Scheduler Filters - 
https://review.openstack.org/138444/

2) Status on cleanup work - https://wiki.openstack.org/wiki/Gantt/kilo



I want to add :
3) Detach service from ComputeNode - Ironic usecase problem

-Sylvain


--

Don Dugger

Censeo Toto nos in Kansa esse decisse. - D. Gale

Ph: 303/443-3786



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Glance REST API v2 Create new Image from external store

2015-02-03 Thread Vinayak Shinde
Hi all,

The Glance v1 API provides the 'x-glance-api-copy-from' and
'x-image-meta-location' headers, which can be used if the image contents have
to be served out of an external store.
Along the same lines, what is the parameter/header that provides the same
functionality in the Glance v2 API?

(I could not find the parameter in the documentation at
http://developer.openstack.org/api-ref-image-v2.html)

Thanks in advance.
-- 
Thanks,
Vinayak Shinde.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] operators vs users for choosing convergence engine

2015-02-03 Thread Angus Salkeld
On Tue, Feb 3, 2015 at 10:52 AM, Steve Baker sba...@redhat.com wrote:

 A spec has been raised to add a config option to allow operators to choose
 whether to use the new convergence engine for stack operations. For some
 context you should read the spec first [1]

 Rather than doing this, I would like to propose the following:
 * Users can (optionally) choose which engine to use by specifying an
 engine parameter on stack-create (choice of classic or convergence)
 * Operators can set a config option which determines which engine to use
 if the user makes no explicit choice
 * Heat developers will set the default config option from classic to
 convergence when convergence is deemed sufficiently mature

 I realize it is not ideal to expose this kind of internal implementation
 detail to the user, but choosing convergence _will_ result in different
 stack behaviour (such as multiple concurrent update operations) so there is
 an argument for giving the user the choice. Given enough supporting
 documentation they can choose whether convergence might be worth trying for
 a given stack (for example, a large stack which receives frequent updates)

 Operators likely won't feel they have enough knowledge to make the call
 that a heat install should be switched to using all convergence, and users
 will never be able to try it until the operators do (or the default
 switches).

 Finally, there are also some benefits to heat developers. Creating a whole
 new gate job to test convergence-enabled heat will consume its share of CI
 resource. I'm hoping to make it possible for some of our functional tests
 to run against a number of scenarios/environments. Being able to run tests
 under classic and convergence scenarios in one test run will be a great
 help (for performance profiling too).


Hi

I didn't have a good initial response to this, but it's growing on me. One
issue is the specific option that we expose: it's not nice having
a dead option once we totally switch over and remove classic. So is it
worth coming up with a real feature that convergence-phase-1 enables
and using that instead (like enable-concurrent-updates)? Then we need to
think about whether we would actually want to keep that feature around
(as in, once classic is gone, is it possible to maintain
disable-concurrent-updates?).
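
To make that concrete, the two knobs being discussed could surface roughly
like this (option and parameter names are hypothetical, not what the spec
settles on):

    # heat.conf -- operator-chosen default (option name hypothetical)
    [DEFAULT]
    default_stack_engine = classic    # or: convergence

    # optional per-stack override at create time (flag name hypothetical):
    #   heat stack-create mystack -f template.yaml --engine convergence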

Regards
Angus



 If there is enough agreement then I'm fine with taking over and updating
 the convergence-config-option spec.

 [1] https://review.openstack.org/#/c/152301/2/specs/kilo/
 convergence-config-option.rst

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-02-03 Thread Kevin Benton
So do we just use whatever name we want instead? Can we use 'referrer'? ;-)
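
For reference, a body-based variant of the payload discussed in the quoted
thread below might look something like this (field names illustrative,
nothing here is a settled format):

    {
        "code": "compute.feature_unsupported_on_instance_type",
        "message": "This compute feature is not supported on this instance type.",
        "help": "http://errors.openstack.org/compute.feature_unsupported_on_instance_type"
    }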

On Tue, Feb 3, 2015 at 5:43 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 02/02/2015 09:07 PM, Everett Toews wrote:

 On Feb 2, 2015, at 7:24 PM, Sean Dague s...@dague.net wrote:

  On 02/02/2015 05:35 PM, Jay Pipes wrote:

 On 01/29/2015 12:41 PM, Sean Dague wrote:

 Correct. This actually came up at the Nova mid cycle in a side
 conversation with Ironic and Neutron folks.

 HTTP error codes are not sufficiently granular to describe what happens
 when a REST service goes wrong, especially if it goes wrong in a way
 that would let the client do something other than blindly try the same
 request, or fail.

 Having a standard json error payload would be really nice.

 {
  fault: ComputeFeatureUnsupportedOnInstanceType,
  message: This compute feature is not supported on this kind of
 instance type. If you need this feature please use a different instance
 type. See your cloud provider for options.
 }

 That would let us surface more specific errors.

 snip


 Standardization here from the API WG would be really great.


 What about having a separate HTTP header that indicates the OpenStack
 Error Code, along with a generated URI for finding more information
 about the error?

 Something like:

 X-OpenStack-Error-Code: 1234
 X-OpenStack-Error-Help-URI: http://errors.openstack.org/1234

 That way is completely backwards compatible (since we wouldn't be
 changing response payloads) and we could handle i18n entirely via the
 HTTP help service running on errors.openstack.org.


 That could definitely be implemented in the short term, but if we're
 talking about API WG long term evolution, I'm not sure why a standard
 error payload body wouldn't be better.


 Agreed. And using the “X-“ prefix in headers has been deprecated for
 over 2 years now [1]. I don’t think we should be using it for new things.

 Everett

 [1] https://tools.ietf.org/html/rfc6648


 Ha! Good to know about the X- stuff :) Duly noted!


 -jay

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ec2-api] Tagging functionality in nova's EC2 API

2015-02-03 Thread Michael Still
I agree that the priority at the moment for Nova should be getting our
EC2 implementation working well enough that it buys us time to
transition people to whatever the future may be.

It's unfortunate that we have approved the spec for this work in Kilo,
but I think that's an indication that we didn't realize that boto
would require new forms of auth in this release. I'm happy to discuss
this at the next nova meeting if people feel that's needed.
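
For anyone not familiar with the tag-filtering use case described in the
quoted analysis below, it boils down to calls like this against EC2 (sketch
via boto; credentials and region are assumed to be configured, and the tag
names are illustrative):

    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')
    # Describe only the instances carrying a given user-defined tag.
    reservations = conn.get_all_instances(filters={'tag:environment': 'production'})
    instances = [i for r in reservations for i in r.instances]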

Michael

On Wed, Feb 4, 2015 at 3:02 AM, Alexandre Levine
alev...@cloudscaling.com wrote:
 I'm writing this in regard to several reviews concering tagging
 functionality for EC2 API in nova.
 The list of the reviews concerned is here:

 https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/ec2-volume-and-snapshot-tags,n,z

 I don't think it's a good idea to merge these reviews. The analysis is
 below:

 Tagging in AWS

 Main goal for the tagging functionality in AWS is to be able to efficiently
 distinguish various resources based on user-defined criteria:

 Tags enable you to categorize your AWS resources in different ways, for
 example, by purpose, owner, or environment.
 ...
 You can search and filter the resources based on the tags you add.

 (quoted from here:
 http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html)

 It means that one of the two main use-cases is to be able to use Tags as
 filter when you describe something. Another one is to be able to get
 information about particular tag with all of the resources tagged by it.
 Also there is a constraint:

 You can tag public or shared resources, but the tags you assign are
 available only to your AWS account and not to the other accounts sharing the
 resource.

 The important part here is shared resources which are visible to different
 users but tags are not shared - each user sees his own.

 Existing implementation in nova

 Existing implementation of tags in nova's EC2 API covers only instances. But
 it does so in both areas:
 1. Tags management (create, delete, describe,...)
 2. Instances filtering (describe_instances with filtering by tags).
 The implementation is based on storing tags in each instance's metadata. And
 nova DB sqlalchemy level uses tag: in queries to allow instances
 describing with tag filters.

 I see the following design flaws in existing implementation:

 1. It uses instance's own metadata for storing information about assigned
 tags.
 Problems:
 - it doesn't scale when you want to start using tags for other resources.
 Following this design decision you'll have to store tags in other resources
 metadata, which mean different services APIs and other databases. So
 performance for searching for tags or tagged resources in main use cases
 should suffer. You'll have to search through several remote APIs, querying
 different metadatas to collect all info and then to compile the result.
 - instances are not shared resources, but images are. It means that, when
 developed, metadata for images will have to store different tags for
 different users somehow.

 2. EC2-specific code (tag: searching in novaDB sqlalchemy) leaked into
 lower layers of nova.
 - layering is violated. There should be no EC2-specifics below EC2 API
 library in nova, ideally.
 - each other service will have to implement the same solution in its own DB
 level to support tagging for EC2 API.

 Proposed review changes

 The review in question introduces tagging for volumes and snapshots. It
 follows design decisions of existing instance tagging implementation, but
 realizes only one of the two use cases. It provides create, delete,
 describe for tags. But it doesn't provide describe_volumes or
 describe_snapshots for filtering.

 It suffers from the design flaws I listed above. It has to query remote API
 (cinder) for metadata. It didn't implement filtering by tag: in cinder DB
 level so we don't see implementation of describe_volumes with tags
 filtering.

 Current stackforge/ec2-api tagging implementation

 In comparison, the implementation of tagging in stackforge/ec2-api, stores
 all of the tags and their links to resources and users in a separate place.
 So we can efficiently list tags and its resources or filter by tags during
 describing of some of the resources. Also user-specific tagging is
 supported.

 Conclusion

 Keeping in mind all of the above, and seeing your discussion about
 deprecation of EC2 API in nova, I don't feel it's a good time to add such a
 half-baked code with some potential problems into nova.
 I think it's better to concentrate on cleaning up, fixing, reviving and
 making bullet-proof whatever functionality is currently present in nova for
 EC2 and used by clients.

 Best regards,
   Alex Levine


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 

Re: [openstack-dev] [neutron] high dhcp lease times in neutron deployments considered harmful (or not???)

2015-02-03 Thread Robert Collins
On 3 February 2015 at 00:48, Kevin Benton blak...@gmail.com wrote:
The only thing this discussion has convinced me of is that allowing users
 to change the fixed IP address on a neutron port leads to a bad
 user-experience.
...

Documenting a VM reboot is necessary, or even deprecating this (you won't
 like that) are sounding better to me by the minute.

 If this is an approach you really want to go with, then we should at least
 be consistent and deprecate the extra dhcp options extension (or at least
 the ability to update ports' dhcp options). Updating subnet attributes like
 gateway_ip, dns_nameserves, and host_routes should be thrown out as well.
 All of these things depend on the DHCP server to deliver updated information
 and are hindered by renewal times. Why discriminate against IP updates on a
 port? A failure to receive many of those other types of changes could result
 in just as severe of a connection disruption.

So the reason we added the extra dhcp options extension was to support
PXE booting physical machines for Nova baremetal, and then Ironic. It
wasn't added for end users to use on the port, but as a generic way of
supporting the specific PXE options needed - and that was done that
way after discussing w/Neutron devs.

We update ports for two reasons. Primarily, Ironic is HA and will move
the TFTPd that boots are happening from if an Ironic node has failed.
Secondly, because a not-uncommon operation on physical machines is to
replace broken NICs, and forcing a redeploy seemed unreasonable. The
former case doesn't affect running nodes since its only consulted on
reboot. The second case is by definition only possible when the NIC in
question is offline (whether hotplug hardware or not).
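
For concreteness, the kind of port update Ironic performs looks roughly like
this through the Neutron API (sketch only; the IDs, addresses and credentials
are placeholders):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='demo',
                            auth_url='http://127.0.0.1:5000/v2.0')

    # Point the node's port at the (possibly relocated) TFTP server.
    neutron.update_port('PORT_UUID', {'port': {'extra_dhcp_opts': [
        {'opt_name': 'tftp-server', 'opt_value': '192.0.2.10'},
        {'opt_name': 'bootfile-name', 'opt_value': 'pxelinux.0'},
    ]}})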

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] high dhcp lease times in neutron deployments considered harmful (or not???)

2015-02-03 Thread Kevin Benton
 If you had created a second network and subnet this would have been
dropped (different broadcast domain).

Well that update wouldn't have been allowed at the API. You can't use a
fixed IP from a subnet on a network that your port isn't attached to.
Changing a neutron port to a different network is not what we are talking
about here.

 I said that's a bad design because other things can cause it to go
offline, for example:

Yet people do it anyway, which is why I referenced the EC2 example. People
can deal with outages caused by unexpected failures. The outage we are
talking about is part of a normal API call and it doesn't make any sense to
the user.

 If it takes 10 minutes for them to re-create their instance elsewhere
that cannot be blamed on neutron, even if it was our API call that caused
it to go offline.

The outage can still be blamed on Neutron. What you are implying here is
that instead of improving the usability of Neutron, we just give up and
tell users that they should have known better. I don't like supporting a
project with that kind of approach to usability. It leads to unhappy users
and it reflects poorly on the quality of the project.

The difference with a port IP change API call is that it requires action on
the VM's part that neutron can't trigger immediately.

We know why these are different because we understand how Neutron works
internally, but there is no reason to think that a user would know why
these are different. From a user's perspective, one API call to change an
IP (floating IP) works as expected, the other has a huge variable delay
(port IP).

How is warning the user about this a bad thing?

We can and should make a note of this behavior, but it's not enough IMO.
Users don't read the documentation for these kinds of things until they hit
an issue. We can update the Neutron server to return the DHCP interval to
the Neutron client and update the client to output these warnings, but it's
still a bit late at that point, since we are telling the user, "You just
broke your VM for somewhere between 0 and (1/2 DHCP lease) hours. If you
need it sooner, hopefully you have console access or are fine with a forced
restart."

There is no delay in the API call here, the port was updated just as the
user requested.

I never said there was a delay in the API call. I am talking about how long
it takes for that to take effect on the data plane. For it to take full
effect, the VMs need to get the information from the DHCP server. The long
default lease we have now means they won't get the information for hours on
average, which is the long delay I am referring to.


And adding a DHCP option to tell them to renew more frequently doesn't fix
the problem, it only lessens it to ~(interval/2) - that might not be
acceptable to users and they need to know the danger.

In the very first email in this thread, I pointed out that this is only
reducing the time. I don't think that was ever up for debate. The danger
exists already and warning them with whatever mechanism you had in mind
is orthogonal to my proposal to reduce the downtime.

This is the one point I've been trying to get across in this whole
discussion - these are advanced options that users need to take caution
with, neutron can only do so much.

Neutron is completely responsible for the management of the DHCP server in
this case. We have a lot of room for improvement here. I don't think we
should throw in the towel yet.

On Tue, Feb 3, 2015 at 8:53 AM, Brian Haley brian.ha...@hp.com wrote:

 On 02/03/2015 05:10 AM, Kevin Benton wrote:
 The unicast DHCP will make it to the wire, but if you've renumbered the
  subnet either a) the DHCP server won't respond because it's IP has
 changed as
  well; or b) the DHCP server won't respond because there is no mapping
 for the VM
  on it's old subnet.
 
  We aren't changing the DHCP server's IP here. The process that I saw was
 to add
  a subnet and start moving VMs over. It's not 'b' either, because the
 server
  generates a DHCPNAK in response and which will immediately cause the
 client to
  release/renew. I have verified this behavior already and recorded a
 packet
  capture for you.[1]
 
  In the capture, the renewal value is 4 seconds. I captured one renewal
 before
  the IP address change from 99.99.99.5 to 10.0.0.25 took place. You can
 see on
  the next renewal, the DHCP server immediately generates a NACK. The
 client then
  releases its address, requests a new one, assigns it and ACKs within a
 couple of
  seconds.

 Thanks for the trace.  So one thing I noticed is that this unicast DHCP
 only got
 to the server since you created a second subnet on this network (dest MAC
 of
 packet was that of same router interface).  If you had created a second
 network
 and subnet this would have been dropped (different broadcast domain).
 These
 little differences are things users need to know because they lead to heads
 banging on desks :(

 This would happen if the AZ their VM was in went offline as well, at
 which
  point 

[openstack-dev] [neutron] [lbaas] LBaaS Haproxy performance benchmarking

2015-02-03 Thread Varun Lodaya
Hi,

We were trying to use haproxy as our LBaaS solution on the overlay. Has anybody 
done some baseline benchmarking with LBaaSv1 haproxy solution?

Also, any recommended tools which we could use to do that?

Thanks,
Varun
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Designate]some qestions about Designate

2015-02-03 Thread wujiangtaoh...@163.com
Hi, I have some questions about the Designate project.

1. Can Designate be used with OpenStack Icehouse? How about Juno or Kilo?
2. I have tried to deploy Designate using devstack from the master branch, but
only PowerDNS is supported. Can bind9 be supported?
3. When deploying Designate using devstack, there are some problems: a) I can't
delete a domain; b) operations in Designate are not reflected in PowerDNS.
Can anyone help me, or point me to some references?



gentle wu
ChinaMobile (suzhou) software technology ltd .
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] operators vs users for choosing convergence engine

2015-02-03 Thread Pavlo Shchelokovskyy
+1, that would ease development and also drive adoption IMO, as people
could start using/experimenting with it earlier, and more eyes == fewer
bugs. You can never predict all the ways users will use and abuse your
new shiny feature :)

Best regards,

Pavlo Shchelokovskyy
Software Engineer
Mirantis Inc
www.mirantis.com

On Tue, Feb 3, 2015 at 6:48 AM, Robert Collins robe...@robertcollins.net
wrote:

 I think incremental adoption is a great principle to have and this
 will enable that.

 So +1

 -Rob

 On 3 February 2015 at 13:52, Steve Baker sba...@redhat.com wrote:
  A spec has been raised to add a config option to allow operators to
 choose
  whether to use the new convergence engine for stack operations. For some
  context you should read the spec first [1]
 
  Rather than doing this, I would like to propose the following:
  * Users can (optionally) choose which engine to use by specifying an
 engine
  parameter on stack-create (choice of classic or convergence)
  * Operators can set a config option which determines which engine to use
 if
  the user makes no explicit choice
  * Heat developers will set the default config option from classic to
  convergence when convergence is deemed sufficiently mature
 
  I realize it is not ideal to expose this kind of internal implementation
  detail to the user, but choosing convergence _will_ result in different
  stack behaviour (such as multiple concurrent update operations) so there
 is
  an argument for giving the user the choice. Given enough supporting
  documentation they can choose whether convergence might be worth trying
 for
  a given stack (for example, a large stack which receives frequent
 updates)
 
  Operators likely won't feel they have enough knowledge to make the call
 that
  a heat install should be switched to using all convergence, and users
 will
  never be able to try it until the operators do (or the default switches).
 
  Finally, there are also some benefits to heat developers. Creating a
 whole
  new gate job to test convergence-enabled heat will consume its share of
 CI
  resource. I'm hoping to make it possible for some of our functional
 tests to
  run against a number of scenarios/environments. Being able to run tests
  under classic and convergence scenarios in one test run will be a great
 help
  (for performance profiling too).
 
  If there is enough agreement then I'm fine with taking over and updating
 the
  convergence-config-option spec.
 
  [1]
 
 https://review.openstack.org/#/c/152301/2/specs/kilo/convergence-config-option.rst
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Product] [all][log] Openstack HTTP error codes

2015-02-03 Thread Kuvaja, Erno
Now, in my understanding our services do not log to the user. The user gets 
whatever error message/exception happens to be thrown at them. This is exactly 
why we need some common identifier between the two (and to whoever proposes the 
request ID as that identifier: I can get some of my friends with well broken 
English to call you and try to give it to you over the phone ;) ).

More inline.

 -Original Message-
 From: Rochelle Grober [mailto:rochelle.gro...@huawei.com]
 Sent: 02 February 2015 21:34
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Product] [all][log] Openstack HTTP error codes
 
 What I see in this conversation is that we are talking about multiple 
 different
 user classes.
 
 Infra-operator needs as much info as possible, so if it is a vendor driver 
 that is
 erring out, the dev-ops can see it in the log.

NO! Absolutely not. This is where we need to be careful about what we classify 
as DEBUG and what as INFO+, as the ops definitely do not need nor want it all.
 
 Tenant-operator is a totally different class of user.  These guys need VM
 based logs and virtual network based logs, etc., but should never see as far
 under the covers as the infra-ops *has* to see.

They see pretty much just the error messages raised to them, not the cloud 
infra logs anyway. What we need to do is be more helpful towards them about what 
they can and should resolve themselves and where they would need ops help.
 
 So, sounds like a security policy issue of what makes it to tenant logs and
 what stays in the data center thing.

Logs should never contain sensitive information (URIs, credentials, etc.) 
regardless of where they are stored. Then again, obscurity is not security either.
 
 There are *lots* of logs that are being generated.  It sounds like we need
 standards on what goes into which logs along with error codes,
 logging/reporting levels, criticality, etc.

We need guidelines. It's really hard to come up with tight rules for how things 
need to be logged, as a backend failure can be critical for some services while 
others might not care too much about it. (For example, if Swift has a disk down, 
it's not a catastrophic failure; it just moves on to the next copy. But if the 
backend store is down for Glance, we can do pretty much nothing. Should these two 
backend store failures be logged the same way? No, they should not.)

We need to keep the decision in the projects, as mostly they are the only ones 
who know how a specific error condition affects the service. Also, if the rules 
do not fit, they are really difficult to enforce, so let's not pick that fight.

- Erno
 
 --Rocky
 
 (bcc'ing the ops list so they can join this discussion, here)
 
 -Original Message-
 From: Sean Dague [mailto:s...@dague.net]
 Sent: Monday, February 02, 2015 8:19 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Product] [all][log] Openstack HTTP error codes
 
 On 02/01/2015 06:20 PM, Morgan Fainberg wrote:
  Putting on my sorry-but-it-is-my-job-to-get-in-your-way hat (aka
 security), let's be careful how generous we are with the user and data we
 hand back. It should give enough information to be useful but no more. I
 don't want to see us opened to weird attack vectors because we're exposing
 internal state too generously.
 
  In short let's aim for a slow roll of extra info in, and evaluate each data 
  point
 we expose (about a failure) before we do so. Knowing more about a failure is
 important for our users. Allowing easy access to information that could be
 used to attack / increase impact of a DOS could be bad.
 
  I think we can do it but it is important to not swing the pendulum too far
 the other direction too fast (give too much info all of a sudden).
 
 Security by cloud obscurity?
 
 I agree we should evaluate information sharing with security in mind.
 However, the black boxing level we have today is bad for OpenStack. At a
 certain point once you've added so many belts and suspenders, you can no
 longer walk normally any more.
 
 Anyway, lets stop having this discussion in abstract and actually just 
 evaluate
 the cases in question that come up.
 
   -Sean
 
 --
 Sean Dague
 http://dague.net
 
 __
 
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: OpenStack-dev-
 requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 __
 
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: OpenStack-dev-
 requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [Designate]some qestions about Designate

2015-02-03 Thread 严超
I guess PowerDNS and bind are supported. This link may help you:
http://docs.openstack.org/developer/designate/getting-started.html

Best Regards!

Chao Yan
--
My twitter: Andy Yan @yanchao727  https://twitter.com/yanchao727
My Weibo: http://weibo.com/herewearenow
--

2015-02-03 17:38 GMT+08:00 wujiangtaoh...@163.com wujiangtaoh...@163.com:

 Hi , I have some qestions about the project of Designate.

 1、Can Designate be used with openstack icehouse ?  how about Juno or kilo
 ?
 2、I have tried to  deploy Designate using devstack of master branch. but
 only PowerDNS are supported. Can bind9 be supported ?
 3、when deploy designate using devstack, there are some problems: a) i
 can't delete a domain  b) the operating of Designate doesn't  be reflected
 in PowerDNS
 can anyone help me?  for some references  ?

 --
 gentle wu
 ChinaMobile (suzhou) software technology ltd .

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] high dhcp lease times in neutron deployments considered harmful (or not???)

2015-02-03 Thread Kevin Benton
The unicast DHCP will make it to the wire, but if you've renumbered the
subnet either a) the DHCP server won't respond because it's IP has changed
as well; or b) the DHCP server won't respond because there is no mapping
for the VM on it's old subnet.

We aren't changing the DHCP server's IP here. The process that I saw was to
add a subnet and start moving VMs over. It's not 'b' either, because the
server generates a DHCPNAK in response, which will immediately cause the
client to release/renew. I have verified this behavior already and recorded
a packet capture for you.[1]

In the capture, the renewal value is 4 seconds. I captured one renewal
before the IP address change from 99.99.99.5 to 10.0.0.25 took place. You
can see on the next renewal, the DHCP server immediately generates a NACK.
The client then releases its address, requests a new one, assigns it and
ACKs within a couple of seconds.

This would happen if the AZ their VM was in went offline as well, at which
point they would change their design to be more cloud-aware than it was.
Let's not heap all the blame on neutron - the user is tasked with vetting
that their decisions meet the requirements they desire by thoroughly
testing it.

An availability zone going offline is not the same as an API operation that
takes a day to apply. In an internal cloud, maintenance for AZs can be
advertised and planned around by tenants running single-AZ services. Even
if you want to reference a public cloud, look how much of the Internet
breaks when Amazon's us-east-1a or us-east-1d AZs have issues. Even though
people are supposed to be bringing cattle to the cloud, a huge portion
already have pets that they are attached to or that they can't convert into
cattle.

If our floating IP 'associate' action took 12 hours to take effect on a
running instance, would telling users to reboot their instances to apply
floating IPs faster be okay? I would certainly heap the blame on Neutron
there.


How about a big (*) next to all the things that could cause issues?  :)

You want to put it next to all of the API calls to put the burden on the
users. I want to put it next to the DHCP renewal interval in the config
files to put the burden on the operators. :)

(*) Increasing this value will increase the delay between API calls and
when they take effect on the data plane for any that depend on DHCP to
relay the information. (e.g. port IP/subnet changes, port dhcp option
changes, subnet gateways, subnet routes, subnet DNS servers, etc)

1. http://paste.openstack.org/show/166048/
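
For reference, the operator-facing knob in question is in neutron.conf; a
minimal sketch (the value shown is the current default):

    [DEFAULT]
    # Lease length (seconds) handed out by the Neutron-managed dnsmasq.
    # Lowering it shrinks the delay window described above at the cost of
    # more renewal traffic.
    dhcp_lease_duration = 86400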


On Mon, Feb 2, 2015 at 8:57 AM, Brian Haley brian.ha...@hp.com wrote:

 Kevin,

 I think we are finally converging.  One of the points I've been trying to
 make
 is that users are playing with fire when they start playing with some of
 these
 port attributes, and given the tool we have to work with (DHCP), the
 instantiation of these changes cannot be made seamlessly to a VM.  That's
 life
 in the cloud, and most of these things can (and should) be designed around.

 On 02/02/2015 06:48 AM, Kevin Benton wrote:
  The only thing this discussion has convinced me of is that allowing
 users
  to change the fixed IP address on a neutron port leads to a bad
  user-experience.
 
  Not as bad as having to delete a port and create another one on the same
  network just to change addresses though...
 
  Even with an 8-minute renew time you're talking up to a 7-minute
 blackout
  (87.5% of lease time before using broadcast).
 
  I suggested 240 seconds renewal time, which is up to 4 minutes of
  connectivity outage. This doesn't have anything to do with lease time and
  unicast DHCP will work because the spoof rules allow DHCP client traffic
  before restricting to specific IPs.

 The unicast DHCP will make it to the wire, but if you've renumbered the
 subnet
 either a) the DHCP server won't respond because it's IP has changed as
 well; or
 b) the DHCP server won't respond because there is no mapping for the VM on
 it's
 old subnet.

  Most would have rebooted long before then, true?  Cattle not pets,
 right?
 
  Only in an ideal world that I haven't encountered with customer
 deployments.
  Many enterprise deployments end up bringing pets along where reboots
 aren't
  always free. The time taken to relaunch programs and restore state can
 end
  up being 10 minutes+ if it's something like a VDI deployment or dev
  environment where someone spends a lot of time working on one VM.

 This would happen if the AZ their VM was in went offline as well, at which
 point
 they would change their design to be more cloud-aware than it was.  Let's
 not
 heap all the blame on neutron - the user is tasked with vetting that their
 decisions meet the requirements they desire by thoroughly testing it.

  Changing the lease time is just papering-over the real bug - neutron
  doesn't support seamless changes in IP addresses on ports, since it
 totally
  relies on the dhcp configuration settings a deployer has chosen.
 
  It doesn't 

Re: [openstack-dev] [Fuel][Fuel-Library] MVP implementation of Granular Deployment merged into Fuel master branch

2015-02-03 Thread Andrey Danin
I totally agree with Andrew.

On Tuesday, February 3, 2015, Andrew Woodward xar...@gmail.com wrote:

 Either we do specs, or we don't. Either everyone has to land their specs
 before code or no one does. It's that simple.

 What should be sorted out? It is unavoidable that people will comment and
 ask questions during development cycle.
 I am not sure that merging spec as early as possible, and than add
 comments and different fixes is good strategy.
 On the other hand we need to eliminate risks.. but how merging spec can
 help?


 The spec defining what has been committed already needs to be merged, and
 we can open another review to modify the spec into another direction if
 necessary.

 We can spend several month on polishing the spec, will it help
 to release feature in time? I don't think so.


 The spec doesn't have to be perfect, but it needs to be merged prior to
 code describing it.

 I think the spec should be a synchronization point, where different
 teams can discuss details and make sure that everything is correct.
 The spec should represent the current state of the code which is
 merged and which is going to be merged.


 This isn't the intent of the spec; it's to document the extent, general
 direction, and impact of a feature. As a side effect, well defined specs
 can also serve as documentation for the feature. While the discussion is
 common on the spec, this should be done on a merged spec.

 On Thu, Jan 29, 2015 at 2:45 AM, Evgeniy L e...@mirantis.com wrote:

 Hi,

 +1 to Dmitriy's comment.
 We can spend several month on polishing the spec, will it help
 to release feature in time? I don't think so.
 Also with your suggestion we'll get a lot of patches over 2 thousands
 lines of code, after spec is merged. Huge patches reduce quality,
 because it's too hard to review, also such patches much harder
 to get merged.
 I think the spec should be a synchronization point, where different
 teams can discuss details and make sure that everything is correct.
 The spec should represent the current state of the code which is
 merged and which is going to be merged.

 Thanks,

 On Thu, Jan 29, 2015 at 1:03 AM, Dmitriy Shulyak dshul...@mirantis.com wrote:

 Andrew,
 What should be sorted out? It is unavoidable that people will comment
 and ask questions during development cycle.
 I am not sure that merging spec as early as possible, and than add
 comments and different fixes is good strategy.
 On the other hand we need to eliminate risks.. but how merging spec can
 help?

 On Wed, Jan 28, 2015 at 8:49 PM, Andrew Woodward xar...@gmail.com wrote:

 Vova,

 Its great to see so much progress on this, however it appears that we
 have started merging code prior to the spec landing [0] lets get it
 sorted ASAP.

 [0] https://review.openstack.org/#/c/113491/

 On Mon, Jan 19, 2015 at 8:21 AM, Vladimir Kuklin vkuk...@mirantis.com wrote:
  Hi, Fuelers and Stackers
 
  I am glad to announce that we merged initial support for granular
 deployment
  feature which is described here:
 
 
 https://blueprints.launchpad.net/fuel/+spec/granular-deployment-based-on-tasks
 
  This is an important milestone for our overall deployment and
 operations
  architecture as well as it is going to significantly improve our
 testing and
  engineering process.
 
  Starting from now we can start merging code for:
 
 
 https://blueprints.launchpad.net/fuel/+spec/fuel-library-modularization
 
 https://blueprints.launchpad.net/fuel/+spec/fuel-library-modular-testing
 
  We are still working on documentation and QA stuff, but it should be
 pretty
  simple for you to start trying it out. We would really appreciate your
  feedback.
 
  Existing issues are the following:
 
  1) pre and post deployment hooks are still out of the scope of main
  deployment graph
  2) there is currently only puppet task provider working reliably
  3) no developer published documentation
  4) acyclic graph testing not injected into CI
  5) there is currently no opportunity to execute particular task -
 only the
  whole deployment (code is being reviewed right now)
 
  --
  Yours Faithfully,
  Vladimir Kuklin,
  Fuel Library Tech Lead,
  Mirantis, Inc.
  +7 (495) 640-49-04
  +7 (926) 702-39-68
  Skype kuklinvv
  45bk3, Vorontsovskaya Str.
  Moscow, Russia,
  www.mirantis.com
  www.mirantis.ru
  vkuk...@mirantis.com
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Andrew
 Mirantis
 Ceph community

 On Mon, Jan 19, 2015 at 8:21 AM, Vladimir Kuklin 

Re: [openstack-dev] [Neutron] fixed ip info shown for port even when dhcp is disabled

2015-02-03 Thread Kevin Benton
If we have ports without IPs, I don't think we need a placeholder, do we?
Wouldn't a port without an IP address be the same thing as a port with a
placeholder indicating that it doesn't have an IP address?
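
For context, the port_update path Paddu asks about further down does exist
today; a late-binding flow would look roughly like this (IDs, addresses and
credentials are placeholders, not a recommendation):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='demo',
                            auth_url='http://127.0.0.1:5000/v2.0')

    # Once the externally assigned address is known, reflect it on the port so
    # anti-spoofing and security group rules match what the VM actually uses.
    neutron.update_port('PORT_UUID',
                        {'port': {'fixed_ips': [{'ip_address': '203.0.113.25'}]}})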

On Tue, Feb 3, 2015 at 8:57 AM, John Belamaric jbelama...@infoblox.com
wrote:

  Hi Paddu,

  I think this is less an issue of the pluggable IPAM than it is the
 Neutron management layer, which requires an IP for a port, as far as I
 know. If the management layer is updated to allow a port to exist without a
 known IP, then an additional IP request type could be added to represent
 the placeholder you describing.

  However, doing so leaves IPAM out of the hands of Neutron and out of the
 hands of the external (presumably authoritative) IPAM system. This could
 lead to duplicate IP issues since each VM is deciding its own IP without
 any centralized coordination. I wouldn't recommend this approach to
 managing your IP space.

  John

   From: Padmanabhan Krishnan kpr...@yahoo.com
 Reply-To: Padmanabhan Krishnan kpr...@yahoo.com
 Date: Wednesday, January 28, 2015 at 4:58 PM
 To: John Belamaric jbelama...@infoblox.com, OpenStack Development
 Mailing List (not for usage questions) openstack-dev@lists.openstack.org
 
 Subject: Re: [openstack-dev] [Neutron] fixed ip info shown for port even
 when dhcp is disabled

Some follow up questions on this.

   In the specs, i see that during a create_port,  there's provisions to
 query the external source by  Pluggable IPAM to return the IP.
  This works fine for cases where the external source (say, DHCP server)
 can be queried for the IP address when a launch happens.

  Is there a provision to have the flexibility of a late IP assignment?

  I am thinking of cases, like the temporary unavailability of external IP
 source or lack of standard interfaces in which case data packet snooping is
 used to find the IP address of a VM after launch. Something similar to late
 binding of IP addresses.
  This means the create_port  may not get the IP address from the
 pluggable IPAM. In that case, launch of a VM (or create_port) shouldn't
 fail. The Pluggable IPAM should have some provision to return something
 equivalent to unavailable during create_port and be able to do an
 update_port when the IP address becomes available.

  I don't see that option. Please correct me if I am wrong.

  Thanks,
 Paddu


   On Thursday, December 18, 2014 7:59 AM, Padmanabhan Krishnan 
 kpr...@yahoo.com wrote:


   Hi John,
 Thanks for the pointers. I shall take a look and get back.

  Regards,
 Paddu


On Thursday, December 18, 2014 6:23 AM, John Belamaric 
 jbelama...@infoblox.com wrote:


   Hi Paddu,

  Take a look at what we are working on in Kilo [1] for external IPAM.
 While this does not address DHCP specifically, it does allow you to use an
 external source to allocate the IP that OpenStack uses, which may solve
 your problem.

  Another solution to your question is to invert the logic - you need to
 take the IP allocated by OpenStack and program the DHCP server to provide a
 fixed IP for that MAC.

  You may be interested in looking at this Etherpad [2] that Don Kehn put
 together gathering all the various DHCP blueprints and related info, and
 also at this BP [3] for including a DHCP relay so we can utilize external
 DHCP more easily.

  [1] https://blueprints.launchpad.net/neutron/+spec/neutron-ipam
  [2] https://etherpad.openstack.org/p/neutron-dhcp-org
 [3] https://blueprints.launchpad.net/neutron/+spec/dhcp-relay

  John

   From: Padmanabhan Krishnan kpr...@yahoo.com
 Reply-To: Padmanabhan Krishnan kpr...@yahoo.com, OpenStack Development
 Mailing List (not for usage questions) openstack-dev@lists.openstack.org
 
 Date: Wednesday, December 17, 2014 at 6:06 PM
 To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
 
 Subject: Re: [openstack-dev] [Neutron] fixed ip info shown for port even
 when dhcp is disabled

   This means whatever tools the operators are using, it need to make sure
 the IP address assigned inside the VM matches with Openstack has assigned
 to the port.
 Bringing the question that i had in another thread on the same topic:

  If one wants to use the provider DHCP server and not have Openstack's
 DHCP or L3 agent/DVR, it may not be possible to do so even with DHCP
 disabled in Openstack network. Even if the provider DHCP server is
 configured with the same start/end range in the same subnet, there's no
 guarantee that it will match with Openstack assigned IP address for bulk VM
 launches or  when there's a failure case.
 So, how does one deploy external DHCP with Openstack?

  If Openstack hasn't assigned a IP address when DHCP is disabled for a
 network, can't port_update be done with the provider DHCP specified IP
 address to put the anti-spoofing and security rules?
  With Openstack assigned IP address, port_update cannot be done since IP
 address aren't in sync and can overlap.

  Thanks,
 Paddu



 On 12/16/14 4:30 AM, Pasquale 

Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-02-03 Thread Jay Pipes

On 02/03/2015 06:54 PM, Kevin Benton wrote:

So do we just use whatever name we want instead? Can we use 'referrer'? ;-)


:) How about Content-Length?

-jay


On Tue, Feb 3, 2015 at 5:43 AM, Jay Pipes jaypi...@gmail.com wrote:

On 02/02/2015 09:07 PM, Everett Toews wrote:

On Feb 2, 2015, at 7:24 PM, Sean Dague s...@dague.net wrote:

On 02/02/2015 05:35 PM, Jay Pipes wrote:

On 01/29/2015 12:41 PM, Sean Dague wrote:

Correct. This actually came up at the Nova mid cycle
in a side
conversation with Ironic and Neutron folks.

HTTP error codes are not sufficiently granular to
describe what happens
when a REST service goes wrong, especially if it
goes wrong in a way
that would let the client do something other than
blindly try the same
request, or fail.

Having a standard json error payload would be really
nice.

{
  fault: ComputeFeatureUnsupportedOnInstanceType,
  message: This compute feature is not
supported on this kind of
instance type. If you need this feature please use a
different instance
type. See your cloud provider for options.
}

That would let us surface more specific errors.

snip


Standardization here from the API WG would be really
great.


What about having a separate HTTP header that indicates
the OpenStack
Error Code, along with a generated URI for finding more
information
about the error?

Something like:

X-OpenStack-Error-Code: 1234
X-OpenStack-Error-Help-URI: http://errors.openstack.org/1234

That way is completely backwards compatible (since we
wouldn't be
changing response payloads) and we could handle i18n
entirely via the
HTTP help service running on errors.openstack.org.


That could definitely be implemented in the short term, but
if we're
talking about API WG long term evolution, I'm not sure why a
standard
error payload body wouldn't be better.


Agreed. And using the “X-“ prefix in headers has been deprecated for
over 2 years now [1]. I don’t think we should be using it for
new things.

Everett

[1] https://tools.ietf.org/html/rfc6648


Ha! Good to know about the X- stuff :) Duly noted!


-jay


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Kevin Benton


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] high dhcp lease times in neutron deployments considered harmful (or not???)

2015-02-03 Thread Kevin Benton
I definitely understand the use-case of having updatable stuff and I don't
intend to support any proposals to strip away that functionality. What Brian
was suggesting was to block port IP changes, since they depend on DHCP to
deliver that information to the hosts. I was just pointing out that we
would need to block any API operations that result in different
information being delivered via DHCP for that approach to make sense.

On Tue, Feb 3, 2015 at 5:01 PM, Robert Collins robe...@robertcollins.net
wrote:

 On 3 February 2015 at 00:48, Kevin Benton blak...@gmail.com wrote:
 The only thing this discussion has convinced me of is that allowing users
  to change the fixed IP address on a neutron port leads to a bad
  user-experience.
 ...

 Documenting a VM reboot is necessary, or even deprecating this (you won't
  like that) are sounding better to me by the minute.
 
  If this is an approach you really want to go with, then we should at
 least
  be consistent and deprecate the extra dhcp options extension (or at least
  the ability to update ports' dhcp options). Updating subnet attributes
 like
  gateway_ip, dns_nameserves, and host_routes should be thrown out as well.
  All of these things depend on the DHCP server to deliver updated
 information
  and are hindered by renewal times. Why discriminate against IP updates
 on a
  port? A failure to receive many of those other types of changes could
 result
  in just as severe of a connection disruption.

 So the reason we added the extra dhcp options extension was to support
 PXE booting physical machines for Nova baremetal, and then Ironic. It
 wasn't added for end users to use on the port, but as a generic way of
 supporting the specific PXE options needed - and that was done that
 way after discussing w/Neutron devs.

 We update ports for two reasons. Primarily, Ironic is HA and will move
 the TFTPd that boots are happening from if an Ironic node has failed.
 Secondly, because a non uncommon operation on physical machines is to
 replace broken NICs, and forcing a redeploy seemed unreasonable. The
 former case doesn't affect running nodes since its only consulted on
 reboot. The second case is by definition only possible when the NIC in
 question is offline (whether hotplug hardware or not).

 -Rob


 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev