[openstack-dev] [TripleO] devtest_seed.sh erroring with a 500

2014-05-24 Thread Ricardo Carrillo Cruz
Hi guys

I've been away from TripleO for some time, and now I'm back in business.
I get this error during devtest_seed.sh run:

++ os-apply-config --key baremetal-network.seed.range-end --type raw
--key-default 192.0.2.20
+ BM_NETWORK_SEED_RANGE_END=192.0.2.20
+ setup-neutron 192.0.2.2 192.0.2.20 192.0.2.0/24 192.0.2.1 192.0.2.1
ctlplane
++ keystone tenant-get admin
++ awk '$2==id {print $4}'
+ nova quota-update --cores -1 --instances -1 --ram -1
4bd569caaf704cbcaa3610a49a362a50
+ '[' -z '' ']'
+ setup-baremetal --service-host seed --nodes /dev/fd/63
++ jq '[.nodes[0]]' /home/ricky/tripleo/testenv.json
ERROR: The server has either erred or is incapable of performing the
requested operation. (HTTP 500) (Request-ID:
req-81c5edab-c943-49fd-bc9f-96a96122be26)

Any pointers on how to troubleshoot this further?

Thanks and kind regards
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] devtest_seed.sh erroring with a 500

2014-05-24 Thread Ricardo Carrillo Cruz
Please disregard; I sent this email to openstack-dev instead of the
openstack users mailing list.

Sorry for the noise.


2014-05-24 13:08 GMT+02:00 Ricardo Carrillo Cruz 
ricardo.carrillo.c...@gmail.com:

 Hi guys

 I've been away from TripleO for some time, and now I'm back in business.
 I get this error during devtest_seed.sh run:

 ++ os-apply-config --key baremetal-network.seed.range-end --type raw
 --key-default 192.0.2.20
 + BM_NETWORK_SEED_RANGE_END=192.0.2.20
 + setup-neutron 192.0.2.2 192.0.2.20 192.0.2.0/24 192.0.2.1 192.0.2.1
 ctlplane
 ++ keystone tenant-get admin
 ++ awk '$2==id {print $4}'
 + nova quota-update --cores -1 --instances -1 --ram -1
 4bd569caaf704cbcaa3610a49a362a50
 + '[' -z '' ']'
 + setup-baremetal --service-host seed --nodes /dev/fd/63
 ++ jq '[.nodes[0]]' /home/ricky/tripleo/testenv.json
 ERROR: The server has either erred or is incapable of performing the
 requested operation. (HTTP 500) (Request-ID:
 req-81c5edab-c943-49fd-bc9f-96a96122be26)

 Any pointers on how to troubleshoot this further?

 Thanks and kind regards

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][group-based-policy] GP mapping driver

2014-05-24 Thread Robert Kukura


On 5/23/14, 10:54 PM, Armando M. wrote:

On 23 May 2014 12:31, Robert Kukura kuk...@noironetworks.com wrote:

On 5/23/14, 12:46 AM, Mandeep Dhami wrote:

Hi Armando:

Those are good points. I will let Bob Kukura chime in on the specifics of
how we intend to do that integration. But if what you see in the
prototype/PoC was our final design for integration with Neutron core, I
would be worried about that too. That specific part of the code
(events/notifications for DHCP) was done in that way just for the prototype
- to allow us to experiment with the parts that were new and needed
experimentation: the APIs and the model.

That is the exact reason that we did not initially check the code in to gerrit
- so that we would not confuse the review process with the prototype process.
But we were asked by other cores to check in even the prototype code as
WIP patches to allow for review of the API parts. That can unfortunately
create this very misunderstanding. For the review, I would recommend not the
WIP patches, as they contain the prototype parts as well, but just the final
patches that are not marked WIP. If you see such issues in that part of the
code, please DO raise them, as that would be code that we intend to upstream.

I believe Bob did discuss the specifics of this integration issue with you
at the summit, but like I said it is best if he represents that side
himself.

Armando and Mandeep,

Right, we do need a workable solution for the GBP driver to invoke neutron
API operations, and this came up at the summit.

We started out in the PoC directly calling the plugin, as is currently done
when creating ports for agents. But this is not sufficient because the DHCP
notifications, and I think the nova notifications, are needed for VM ports.
We also really should be generating the other notifications, enforcing
quotas, etc. for the neutron resources.

I am at a loss here: if you say that you couldn't fit at the plugin
level, that is because it is the wrong level!! Sitting above it and
redoing all the glue code around it to add DHCP notifications etc.
continues the bad practice within the Neutron codebase where there is
not a good separation of concerns: for instance, everything is cobbled
together, like the DB and plugin logic. I appreciate that some design
decisions have been made in the past, but there's no good reason for a
nice new feature like GP to continue this bad practice; this is why I
feel strongly about the current approach being taken.

Armando, I am agreeing with you! The code you saw was a proof-of-concept
implementation intended as a learning exercise, not something intended
to be merged as-is into the neutron code base. The approach for invoking
resources from the driver(s) will be revisited before the driver code is
submitted for review.



We could just use python-neutronclient, but I think we'd prefer to avoid the
overhead. The neutron project already depends on python-neutronclient for
some tests, the debug facility, and the metaplugin, so in retrospect, we
could have easily used it in the PoC.

I am not sure I understand what overhead you mean here. Could you
clarify? Actually, looking at the code, I see a mind-boggling set of
interactions going back and forth between the GP plugin, the policy
driver manager, the mapping driver, and the core plugin: they are all
entangled together. For instance, when creating an endpoint, the GP
plugin ends up calling the mapping driver, which in turn ends up calling
the GP plugin itself! If this is not overhead, I don't know what is!
The way the code has been structured makes it very difficult to read,
let alone maintain and extend with other policy mappers. The ML2-like
nature of the approach taken might work well in the context of the core
plugin, mechanism drivers, etc., but I would argue that it applies poorly
to the context of GP.

The overhead of using python-neutronclient is that unnecessary
serialization/deserialization is performed, as well as socket
communication through the kernel. This is all required between
processes, but not within a single process. A well-defined and efficient
mechanism to invoke resource APIs within the process, with the same
semantics as incoming REST calls, seems like a generally useful addition
to neutron. I'm hopeful the core refactoring effort will provide this
(and am willing to help make sure it does), but we need something we can
use until that is available.
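
To make the overhead concrete, here is a minimal, self-contained sketch
(hypothetical handler and payload, not neutron code) of the two call
paths: an in-process call passes Python objects directly, while a
client-style call pays for JSON serialization/deserialization and socket
I/O on both sides:

import json
import socket
import threading

def create_port_handler(body):
    # Stand-in for a plugin-level resource handler (hypothetical).
    return {"port": {"id": "p1", "network_id": body["port"]["network_id"]}}

request = {"port": {"network_id": "n1"}}

# In-process call: plain Python objects, no copies, no kernel crossing.
response = create_port_handler(request)

# Client-style call: the same payload is serialized, pushed through a
# socket (a local socketpair here), and deserialized again on each side.
server_sock, client_sock = socket.socketpair()

def serve():
    body = json.loads(server_sock.recv(65536).decode())
    server_sock.sendall(json.dumps(create_port_handler(body)).encode())

t = threading.Thread(target=serve)
t.start()
client_sock.sendall(json.dumps(request).encode())
response_over_wire = json.loads(client_sock.recv(65536).decode())
t.join()

assert response == response_over_wire

Both paths return the same result; the difference is purely the
marshalling and the kernel round trip in the second one.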


One lesson we learned from the PoC is that the implicit management of
the GP resources (RDs and BDs) is completely independent of the
mapping of GP resources to neutron resources. We discussed this at the
last GP sub-team IRC meeting, and decided to package this functionality
as a separate driver that is invoked prior to the mapping_driver, and
can also be used in conjunction with other GP back-end drivers. I think
this will help improve the structure and readability of the code, and it
also shows the applicability of the ML2-like nature of the driver API.
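
As a rough illustration of that driver API (the names below are
hypothetical, not the proposed GP interfaces), the manager invokes an
ordered list of drivers, so an implicit-policy driver can fill in
missing resources before the mapping driver runs:

class PolicyDriver(object):
    """Hypothetical base class for GP back-end drivers."""
    def create_endpoint_precommit(self, context):
        pass

    def create_endpoint_postcommit(self, context):
        pass

class ImplicitPolicyDriver(PolicyDriver):
    """Supplies implicit resources (e.g. a default routing domain)
    when the caller did not specify them; independent of any mapping."""
    def create_endpoint_precommit(self, context):
        context.setdefault("routing_domain", "default-rd")

class MappingDriver(PolicyDriver):
    """Maps GP resources onto neutron resources (sketched as a dict)."""
    def create_endpoint_postcommit(self, context):
        context["neutron_port"] = {"network": context["routing_domain"]}

class DriverManager(object):
    """ML2-style manager: calls each registered driver in order."""
    def __init__(self, drivers):
        self.drivers = drivers

    def create_endpoint(self, context):
        for driver in self.drivers:
            driver.create_endpoint_precommit(context)
        for driver in self.drivers:
            driver.create_endpoint_postcommit(context)
        return context

manager = DriverManager([ImplicitPolicyDriver(), MappingDriver()])
print(manager.create_endpoint({}))

The ordering is the point: because the implicit driver runs first, every
driver after it can assume the implicit resources already exist.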


You are 

Re: [openstack-dev] Election Stats and Review Discussion

2014-05-24 Thread Eoghan Glynn


 One of the things that OpenStack does really well is review our process
 for various activities. I don't believe we have had a discussion in the
 past reviewing our electoral process and I think it would be good to
 have this discussion.
 
 There are two issues that I feel need to be addressed:
 Item One: the health of our voter engagement for the tc election
 Item Two: messaging around campaigning
 others may feel additional issues need some airtime; if you can think of
 other issues affecting our election process for ptl and/or tc elections,
 do add your item to the list.
 
 Item One:
 the health of our voter engagement for the tc election
 
 Going by voter turnout, the health of individual programs as reflected
 in the ptl voter turnout statistics is sound (43.5% voter turnout or
 better); however, the health of the voter turnout for the tc election
 process could tolerate some scrutiny.
 
 For the April 2014 tc election the voter turnout was 29.7%. We have had
 a total of three tc elections in the history of OpenStack and our voter
 turnout percentage is dropping.
 First TC Election, March 2013: 33.6% voter turnout
 Second TC Election, October 2013: 30.9% voter turnout
 Third TC Election, April 2014: 29.7% voter turnout

IMO the single biggest factor dampening interest in the TC
elections would be the staggered terms, such that only about half
the seats are contested in each election.

This, I feel, undermines the sense of there being a potential for
an overall changing of the guard occurring on each cycle.

I appreciate that the current model is in place to ensure some degree
of continuity across cycles, but I suspect much of that continuity
would emerge naturally in any case ... i.e. if we were to change over
to an all-seats-up-for-grabs model, I suspect the overall outcome
wouldn't actually be that different in terms of the TC composition.

However, it might make for a more open contest, and feel like less of
a foregone conclusion ... possibly with the last few seats being contested
by candidates with less of a traditional OpenStack background (e.g.
from one of the smaller projects on the edge of the ecosystem, or even
from the user/operator community).

And that might be just the ticket to reverse the perception of voter
apathy.

 Now our actual votes cast are increasing, but the size of the electorate
 is increasing faster, with proportionally fewer ATCs voting
 March 2013: 208 votes cast, 619 authorized voters
 October 2013: 342 votes cast, 1106 authorized voters
 April 2014: 448 votes cast, 1510 authorized voters
 
 I would like for there to be a discussion around voter engagement in the
 tc election.
 
 Item Two:
 messaging around campaigning
 
 Right now we have no messaging around election campaigning and I think
 we should have some.
 
 Specifically I feel we should state that the TC requires candidates and
 their backers to campaign in the spirit of the OpenStack ideals
 (Openness, Transparency, Commonality, Integration, Quality...)  as
 stated in the TC mission statement[footnote] while refraining from
 sending unsolicited email and also refraining from holding privately
 sponsored campaign events. Campaigning is expected to be done in the
 open with public access to links and content available to all.

Agree with the spirit of the above, but I do feel we should be able
to rely on a gentleperson's agreement between the candidates to
play fair and let their individual records stand for themselves.

Cheers,
Eoghan

 I welcome your participation in this discussion.
 
 Thank you,
 Anita.
 anteaya
 
 [footnote] The Technical Committee (TC) is tasked with providing the
 technical leadership for OpenStack as a whole (all official programs,
 as defined below). It enforces OpenStack ideals (Openness,
 Transparency, Commonality, Integration, Quality...), decides on issues
 affecting multiple programs, forms an ultimate appeals board for
 technical decisions, and generally has oversight over all the OpenStack
 project.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] telemetry-specs repo renamed as ceilometer-specs

2014-05-24 Thread Eoghan Glynn

Hi Ceilometer folks,

Just in case you weren't following the thread on *-specs repos
being renamed from {project}-specs to {program}-specs[1] ...

Note that there was a change of heart once the feedback from
the community indicated some concerns about the discoverability
of repos named for the less commonly used program names.

So the latest is that the naming will now follow the program
code-names, which for us means reverting to ceilometer-specs
from telemetry-specs.

The rename has been done, and SergeyL helpfully put together an
etherpad[2] listing all the changes. Proposals in flight on
telemetry-specs shouldn't be impacted, though before pushing
an updated patchset it's best to clone the renamed repo and apply
the patch to the fresh clone, or otherwise just manually fix
up the remote reference in your .git/config.

Note, however, that the gerrit project query[3] won't report any
pre-rename proposals, due to a quirk in the gerrit query engine.

Cheers,
Eoghan

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-May/035525.html
[2] https://etherpad.openstack.org/p/repo-renaming-2014-05-23
[3] https://review.openstack.org/#/q/project:openstack/ceilometer-specs,n,z

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Designate Incubation Request

2014-05-24 Thread Hayes, Graham

Hi all,

Designate would like to apply for incubation status in OpenStack.

Our application is here: 
https://wiki.openstack.org/wiki/Designate/Incubation_Application 

As part of our application we would like to apply for a new program. Our
application for the program is here:

https://wiki.openstack.org/wiki/Designate/Program_Application 

Designate is a DNS as a Service project, providing end users,
developers, and administrators with an easy-to-use REST API to manage
their DNS zones and records.
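
For those who haven't tried it, here is a minimal sketch of driving that
API from Python; the endpoint, port, and payload below are from memory
of the v1 API and should be treated as assumptions rather than a
reference:

import json
import requests  # third-party HTTP client

# Assumed values for illustration; check the Designate docs for the
# actual endpoint, port, and auth flow in your deployment.
api = "http://designate.example.com:9001/v1"
headers = {"Content-Type": "application/json", "X-Auth-Token": "TOKEN"}
zone = {"name": "example.org.", "email": "admin@example.org"}

resp = requests.post(api + "/domains", data=json.dumps(zone),
                     headers=headers)
print(resp.status_code, resp.json())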

Thanks,

Graham

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][group-based-policy] GP mapping driver

2014-05-24 Thread Armando M.
On 24 May 2014 05:20, Robert Kukura kuk...@noironetworks.com wrote:

 On 5/23/14, 10:54 PM, Armando M. wrote:

 On 23 May 2014 12:31, Robert Kukura kuk...@noironetworks.com wrote:

 On 5/23/14, 12:46 AM, Mandeep Dhami wrote:

 Hi Armando:

 Those are good points. I will let Bob Kukura chime in on the specifics of
 how we intend to do that integration. But if what you see in the
 prototype/PoC was our final design for integration with Neutron core, I
 would be worried about that too. That specific part of the code
 (events/notifications for DHCP) was done in that way just for the
 prototype - to allow us to experiment with the parts that were new and
 needed experimentation: the APIs and the model.

 That is the exact reason that we did not initially check the code in to
 gerrit - so that we would not confuse the review process with the
 prototype process.
 But we were asked by other cores to check in even the prototype code as
 WIP patches to allow for review of the API parts. That can unfortunately
 create this very misunderstanding. For the review, I would recommend not
 the WIP patches, as they contain the prototype parts as well, but just
 the final patches that are not marked WIP. If you see such issues in that
 part of the code, please DO raise them, as that would be code that we
 intend to upstream.

 I believe Bob did discuss the specifics of this integration issue with
 you at the summit, but like I said it is best if he represents that side
 himself.

 Armando and Mandeep,

 Right, we do need a workable solution for the GBP driver to invoke
 neutron API operations, and this came up at the summit.

 We started out in the PoC directly calling the plugin, as is currently
 done when creating ports for agents. But this is not sufficient because
 the DHCP notifications, and I think the nova notifications, are needed
 for VM ports. We also really should be generating the other
 notifications, enforcing quotas, etc. for the neutron resources.

 I am at a loss here: if you say that you couldn't fit at the plugin
 level, that is because it is the wrong level!! Sitting above it and
 redoing all the glue code around it to add DHCP notifications etc.
 continues the bad practice within the Neutron codebase where there is
 not a good separation of concerns: for instance, everything is cobbled
 together, like the DB and plugin logic. I appreciate that some design
 decisions have been made in the past, but there's no good reason for a
 nice new feature like GP to continue this bad practice; this is why I
 feel strongly about the current approach being taken.

 Armando, I am agreeing with you! The code you saw was a proof-of-concept
 implementation intended as a learning exercise, not something intended to
 be merged as-is into the neutron code base. The approach for invoking
 resources from the driver(s) will be revisited before the driver code is
 submitted for review.


 We could just use python-neutronclient, but I think we'd prefer to avoid
 the overhead. The neutron project already depends on python-neutronclient
 for some tests, the debug facility, and the metaplugin, so in retrospect,
 we could have easily used it in the PoC.

 I am not sure I understand what overhead you mean here. Could you
 clarify? Actually, looking at the code, I see a mind-boggling set of
 interactions going back and forth between the GP plugin, the policy
 driver manager, the mapping driver, and the core plugin: they are all
 entangled together. For instance, when creating an endpoint, the GP
 plugin ends up calling the mapping driver, which in turn ends up calling
 the GP plugin itself! If this is not overhead, I don't know what is!
 The way the code has been structured makes it very difficult to read,
 let alone maintain and extend with other policy mappers. The ML2-like
 nature of the approach taken might work well in the context of the core
 plugin, mechanism drivers, etc., but I would argue that it applies poorly
 to the context of GP.

 The overhead of using python-neutronclient is that unnecessary
 serialization/deserialization is performed, as well as socket communication
 through the kernel. This is all required between processes, but not within a
 single process. A well-defined and efficient mechanism to invoke resource
 APIs within the process, with the same semantics as incoming REST calls,
 seems like a generally useful addition to neutron. I'm hopeful the core
 refactoring effort will provide this (and am willing to help make sure it
 does), but we need something we can use until that is available.


I appreciate that there is a cost involved in relying on distributed
communication, but this must be negligible considering what needs to
happen end-to-end. If the overhead being referred to here is the price to
pay for having a more dependable system (e.g. because things can be
scaled out and/or made reliable independently), then I think this is a
price worth paying.

I do hope that the core refactoring is not aiming at what you're
suggesting, 

Re: [openstack-dev] [Neutron][FWaaS]Firewall Web Services Research Thesis Applicability to the OpenStack Project

2014-05-24 Thread Mike Grima
Mohammad,

My responses are inline:
Let's start with the question about Deny. There are no Deny actions. By
default there is no connectivity. If you want to establish connectivity,
you do it with Allow or other actions; otherwise there is no connectivity.
Hence there is no need for Deny.

This makes sense. 

The policies generally apply to the whole group. The idea is to simplify
the use of contract and policy rules by applying them to a group of
like-minded :) endpoints.
So you may reconsider how you group your endpoints so that you can
apply policies to groups of endpoints with similar characteristics/roles.

This makes sense. Group-level policies should be applied to the entire
group. So, am I correct in saying that policies can _only_ be applied to
entire groups, and not to individual VMs within a group? This makes the
assumption that each VM _does not_ have a unique group of its own, akin
to per-user groups on most Linux systems. For example, you have a VM
named VM1. VM1 is a member of one group, web servers; there is no unique
group named VM1.

The last post seemed to indicate that you can apply policies to specific
VMs within a group.
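
To check my understanding, here is how I would sketch the semantics
described above (groups, allow-only contracts, default deny) in plain
illustrative Python, not the GP API:

# Groups and allow-only contracts; anything not allowed is denied.
groups = {"web servers": ["VM1", "VM2"], "databases": ["VM3"]}
contracts = [
    # (consumer group, provider group, protocol, port)
    ("web servers", "databases", "tcp", 3306),
]

def is_allowed(src_vm, dst_vm, protocol, port):
    # Connectivity exists only if a contract between the VMs' groups
    # allows it; there is no Deny action to evaluate.
    for consumer, provider, proto, prt in contracts:
        if (src_vm in groups[consumer] and dst_vm in groups[provider]
                and (protocol, port) == (proto, prt)):
            return True
    return False  # default: no connectivity

print(is_allowed("VM1", "VM3", "tcp", 3306))  # True, via the contract
print(is_allowed("VM3", "VM1", "tcp", 80))    # False, nothing allows it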

Lastly, what is the relationship between group policies and FWaaS?

Thank You,

Mike Grima, RHCE
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [IceHouse][Neutron][Ubuntu 14.04] Error: Failed to delete network

2014-05-24 Thread Martinx - ジェームズ
Guys,

I know how to reproduce this and I filed a bug report about it, here:

https://bugs.launchpad.net/neutron/+bug/1322945

It seems that a regular user can break the Neutron L3 router by adding an
interface to its namespace router that is attached to an IPv6 private
subnet; it also breaks an admin External Network... Seems to be serious
(I think).
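
Reading the trace quoted below, driver comes back as None when the ML2
type manager looks up the segment's network_type, so the driver.obj
dereference fails. Here is a paraphrased sketch (not the actual
managers.py code) of the failing pattern, with a guard that would at
least surface a clearer error:

class TypeManager(object):
    # Paraphrase of the ml2 type-manager lookup, for illustration only.
    def __init__(self, drivers):
        self.drivers = drivers  # e.g. {"vlan": ext, "gre": ext}

    def release_segment(self, session, segment):
        network_type = segment.get("network_type")
        driver = self.drivers.get(network_type)
        if driver is None:
            # Without this guard, driver.obj raises the AttributeError
            # seen in the trace below.
            raise RuntimeError("no type driver loaded for %r; check "
                               "type_drivers in ml2_conf.ini"
                               % network_type)
        driver.obj.release_segment(session, segment)

manager = TypeManager({})  # no drivers registered, as in the failure
try:
    manager.release_segment(None, {"network_type": "vlan"})
except RuntimeError as exc:
    print(exc)

If that is what is happening, one plausible cause is the server's
type_drivers configuration no longer covering the type of the segment
being released -- worth checking before touching the database.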

Regards,
Thiago


On 23 May 2014 14:52, Martinx - ジェームズ thiagocmarti...@gmail.com wrote:

 Guys,

 I'm trying to delete a network in Neutron but it is failing. From Horizon
 it triggers the error message above (subject), and from the CLI it shows this:

 ---
 root@psuaa-1:~# neutron net-delete a1654832-8aac-42d5-8837-6d27b7421892
 Request Failed: internal server error while processing your request.
 ---

 The logs shows:

 ---
 == /var/log/neutron/server.log ==
 2014-05-21 11:49:54.242 5797 INFO neutron.wsgi [-] (5797) accepted
 ('2804:290:4:dead::10', 56908, 0, 0)

 2014-05-21 11:49:54.245 5797 INFO urllib3.connectionpool [-] Starting new
 HTTP connection (1): psuaa-1.mng.tcmc.com.br
 2014-05-21 11:49:54.332 5797 INFO neutron.wsgi
 [req-e1c4d6c4-71de-4bfa-a7db-f09fa0571377 None] 2804:290:4:dead::10 - -
 [21/May/2014 11:49:54] GET
 /v2.0/networks.json?fields=id&id=a1654832-8aac-42d5-8837-6d27b7421892
 HTTP/1.1 200 251 0.089015

 2014-05-21 11:49:54.334 5797 INFO neutron.wsgi
 [req-e1c4d6c4-71de-4bfa-a7db-f09fa0571377 None] (5797) accepted
 ('2804:290:4:dead::10', 56910, 0, 0)

 2014-05-21 11:49:54.380 5797 ERROR neutron.api.v2.resource
 [req-f216416d-8433-444f-9108-f4a17f5bf49d None] delete failed
 2014-05-21 11:49:54.380 5797 TRACE neutron.api.v2.resource Traceback (most
 recent call last):
 2014-05-21 11:49:54.380 5797 TRACE neutron.api.v2.resource   File
 /usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py, line 87, in
 resource
 2014-05-21 11:49:54.380 5797 TRACE neutron.api.v2.resource result =
 method(request=request, **args)
 2014-05-21 11:49:54.380 5797 TRACE neutron.api.v2.resource   File
 /usr/lib/python2.7/dist-packages/neutron/api/v2/base.py, line 449, in
 delete
 2014-05-21 11:49:54.380 5797 TRACE neutron.api.v2.resource
 obj_deleter(request.context, id, **kwargs)
 2014-05-21 11:49:54.380 5797 TRACE neutron.api.v2.resource   File
 /usr/lib/python2.7/dist-packages/neutron/plugins/ml2/plugin.py, line 494,
 in delete_network
 2014-05-21 11:49:54.380 5797 TRACE neutron.api.v2.resource
 self.type_manager.release_segment(session, segment)
 2014-05-21 11:49:54.380 5797 TRACE neutron.api.v2.resource   File
 /usr/lib/python2.7/dist-packages/neutron/plugins/ml2/managers.py, line
 101, in release_segment
 2014-05-21 11:49:54.380 5797 TRACE neutron.api.v2.resource
 driver.obj.release_segment(session, segment)
 2014-05-21 11:49:54.380 5797 TRACE neutron.api.v2.resource AttributeError:
 'NoneType' object has no attribute 'obj'
 2014-05-21 11:49:54.380 5797 TRACE neutron.api.v2.resource
 2014-05-21 11:49:54.383 5797 INFO neutron.wsgi
 [req-f216416d-8433-444f-9108-f4a17f5bf49d None] 2804:290:4:dead::10 - -
 [21/May/2014 11:49:54] DELETE
 /v2.0/networks/a1654832-8aac-42d5-8837-6d27b7421892.json HTTP/1.1 500 296
 0.048123
 ---

 What can I do to delete a net that doesn't want to be deleted? Do I
 just need to clean some tables directly in MySQL, for example...?

 NOTE: I'm double-posting this here on the dev list, because on the users
 list no one seems to be able to help me... Sorry BTW...   :)

 Tks!
 Thiago

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev