Re: [openstack-dev] [storyboard] Prioritization?

2018-09-25 Thread CARVER, PAUL


Doug Hellmann   wrote:

>If we're just throwing data into it without trying to use it to communicate, 
>then I can see us having lots of different views of priority with
>the same level of "official-ness". I don't think that's what we're doing 
>though. I think we're trying to help teams track what they've 
>committed to do and *communicate* those commitments to folks outside of the 
>team. And from that perspective, the most important 
>definition of "priority" is the one attached by the person(s) doing the work. 
>That's not the same as saying no one else's opinion about 
priority matters, but it does ultimately come down to someone actually doing one 
>task before another. And I would like to be able to follow 
>along when those people prioritize work on the bugs I file.

I agree. Different people certainly may prioritize the same thing differently, 
but there are far more consumers of software than there are producers and the 
most important thing a consumer wants to know (about a feature that they're 
eagerly awaiting) is what is the priority of that feature to whoever is doing 
the work of implementing it.

There is certainly room for additional means of juggling and 
discussing/negotiating priorities in the stages before work really gets under 
way, but if it doesn't eventually become clear

1) who's doing the work
2) when they're targeting completion
3) what (if anything) is higher up on their todo list

then it's impossible for anyone else to make any sort of plans that depend on 
that work. Plans could include figuring out how to add more resources or 
contingency plans. It's also possible that people or projects may develop a 
reputation for not delivering on their stated top priorities, but that's at 
least better than having no idea what the priorities are because every person 
and project is making up their own system for tracking it.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection)

2018-06-11 Thread CARVER, PAUL
Matt Riedemann   wrote:

>The specs thing was mentioned last week in IRC when talking about blueprints 
>in launchpad and I just want to reiterate the specs are
>more about high level designs and reviewing those designs in Gerrit which was 
>/ is a major drawback in the 'whiteboard' in launchpad for 
>working on blueprints - old blueprints that had a design (if they had a design 
>at all) were usually linked from a wiki page.

>Anyway, specs are design documents per release. Blueprints in launchpad, at 
>least for nova, are the project management tracking tool for 
>that release. Not all blueprints require a spec, but all specs require a 
>blueprint since specs are generally for API changes or other major 
>design changes or features. Just FYI.

Matt is saying exactly what I've been saying in OpenContrail/Tungsten Fabric 
TSC meetings for a year. Launchpad Blueprints are very valuable for identifying 
what's likely to be in a given release, unambiguously indicating when the team 
has determined that something is going to miss a release (and therefore get 
bumped out to the future) and capturing the history of what was in a release. 
But they're lousy for reviewing and collaborating on technical details of what 
the thing actually is and how it is planned to work.

On the other hand, spec documents in Gerrit are pretty good for iteratively 
refining a design document and ultimately agreeing to a finalized version, but 
not really all that good at reflecting status and progress to people who are 
not down in the weeds of discussing the implementation details of the feature.

If Storyboard can find a way to improve on one or both of these activities, 
that's great. But abandoning Launchpad series and milestones functionality 
without a good replacement isn't a good idea for projects that are using them 
effectively. And for projects that aren't using them, I have to ask whether 
it's because they have a better way of communicating release plans to their 
user/operator communities or if it's because they simply aren't communicating 
release plans.

Generally somebody somewhere is paying for almost all development, so most 
likely somebody wants to know if and when it is/will-be/was done. The simpler 
and more consistent the tooling for communicating that, the less time everyone 
has to spend answering questions from the people who just want to know if 
whatever thing they're waiting on is in progress, on the backlog, or already 
complete.



Re: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection)

2018-06-11 Thread CARVER, PAUL
Jeremy Stanley  wrote:

>I'm just going to come out and call bullshit on this one. How many of the >800 
>official OpenStack deliverable repos have a view like that with any actual 
>relevant detail? If it's "standard" then certainly more than half, right?

Well, that's a bit rude, so I'm not going to get in a swearing contest over 
whether Nova, Neutron and Cinder are more "important" than 800+ other projects. 
I picked a handful of projects that I'm most interested in and which also 
happened to have really clear, accessible and easy to understand information on 
what they have delivered in the past and are planning to deliver in the future. 
If I slighted your favorite projects I apologize.

So, are you saying the information shown in the examples I gave is not useful?

Or just that I've been lucky in the past that the projects I'm most interested 
in do a better than typical job of managing releases but the future is all 
downhill?

If you're saying it's not useful info and we're better off without it then I'll 
just have to disagree. If you're saying that it has been replaced with 
something better, please share the URLs.

I'm all for improvements, but saying "only a few people were doing something 
useful so we should throw it out and nobody do it" isn't a path to improvement. 
How about we discuss alternate (e.g. better/easier/whatever) ways of making the 
information available.




Re: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection)

2018-06-11 Thread CARVER, PAUL
Doug Hellmann   wrote:


>I'm not sure what sort of project-specific documentation we think we need.

Perhaps none if there is a standard, but is there a standard? Can you give me 
examples in Storyboard of "standard" views that present information even 
vaguely similar to https://launchpad.net/nova/+series and 
https://launchpad.net/nova/rocky ? Or is every project on its own to invent 
the views that it will use, independent of what any other project is doing?

>Each project team can set up its own board or worklist for a given series. The 
>"documentation" just needs to point to that thing, right?

If we're relying on each team to set up its own board or worklist, then it 
sounds like there is no standard. In which case, we're back to needing each 
project to document (at a minimum) where to find its view and perhaps also 
how to interpret it.

>Each team may also decide to use a set of tags, and those would need to be 
>documented, but that's no different from launchpad.

I agree, use of tags is likely to be team specific, but where can someone find 
those tags without mind-melding with an experienced member of the project?

E.g. If I navigate to the fairly obvious URL: 
https://bugs.launchpad.net/neutron I can see a list of tags on the right side 
of the page, sorted in descending order by frequency of use. On the other hand, 
if I follow the intuitive process of going to https://storyboard.openstack.org 
and clicking on "Project Groups" and then clicking "heat" and then clicking 
"openstack/heat" I reach the somewhat less obvious URL 
https://storyboard.openstack.org/#!/project/989 and no indication at all of 
what tags might be useful in this project.
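To make the comparison concrete, here is a minimal sketch of what programmatic tag discovery could look like against Storyboard's REST API. The API root and the `tags` query parameter are assumptions for illustration, not confirmed endpoints; the point is only that a consistent, discoverable query would close the gap described above.

```python
# Sketch only: build a query URL for stories carrying a given tag.
# Assumptions (hypothetical, not confirmed in this thread): the API root
# is https://storyboard.openstack.org/api/v1 and the /stories endpoint
# accepts a "tags" query parameter.
from urllib.parse import urlencode

API_ROOT = "https://storyboard.openstack.org/api/v1"  # assumed


def stories_by_tag_url(tag, limit=20):
    """Build the query URL for stories carrying a given tag."""
    query = urlencode({"tags": tag, "limit": limit})
    return f"{API_ROOT}/stories?{query}"


print(stories_by_tag_url("low-hanging-fruit"))
# A real client would then GET this URL and inspect the JSON response.
```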




Re: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection)

2018-06-11 Thread CARVER, PAUL
Jumping into the general Storyboard topic, but distinct from the previous 
questions about searching, is there any equivalent in Storyboard to the 
Launchpad series and milestones diagrams? e.g.:

https://launchpad.net/nova/+series
https://launchpad.net/neutron/+series
https://launchpad.net/cinder/+series
https://launchpad.net/networking-sfc/+series
https://launchpad.net/bgpvpn/+series

As I understand from what I've read and seen on summit talk recordings, anyone 
can create any view of the data they please and they can share their 
personalized view with whomever they want, but that is basically the complete 
opposite of standardization. Does Storyboard have any plans to provide any 
standard views that are consistent across projects? Or is it focused solely on 
the "in club" who know which dashboard views each project has customized?

For anyone trying to follow multiple projects at a strategic level (i.e. not 
down in the weeds day to day, but checking in weekly or monthly) to see what's 
planned, what's deferred, and what's completed for either upcoming milestones 
or looking back to see if something did or did not get finished, a consistent 
cross-project UI of some kind is essential.

For example, with virtually no insider involvement with Nova, I was able to 
locate this view of what's going on for the Rocky series: 
https://launchpad.net/nova/rocky
How would I locate that same information for a project in Storyboard without 
constructing my own custom worklist or finding an insider to share their 
worklist with me?

-- 
Paul Carver
VoIP: 732-545-7377
Cell: 908-803-1656
E: pcar...@att.com
Q Instant Message
If you look closely enough at the present you can find loose bits of the future 
just lying around.





Re: [openstack-dev] [tc] summary of joint leadership meeting from 20 May

2018-06-04 Thread CARVER, PAUL
On Monday, June 04, 2018 18:47, Jay Pipes   wrote:

>Just my two cents, but the OpenStack and Linux foundations seem to be pumping 
>out new "open events" at a pretty regular clip -- >OpenStack Summit, OpenDev, 
>Open Networking Summit, OpenStack Days, OpenInfra Days, OpenNFV summit, the 
>list keeps >growing... at some point, do we think that the industry as a whole 
>is just going to get event overload?

Future tense? I think you could re-write "going to get event overload" into 
past tense and not be wrong.

We may be past the shoe event horizon.



Re: [openstack-dev] [neutron][neutron-lib]Service function defintion files

2017-12-29 Thread CARVER, PAUL
I think it sort of was intentional, although probably not the primary focus. I 
don’t remember if it is a stadium requirement or merely a suggestion, but I 
believe it is strongly encouraged that “official” stadium sub-projects should 
follow neutron’s release cycle whereas “unofficial” projects are free to do 
whatever they want with regard to release cycle, just like with regard to API.

The definition of “stadium” is in some sense tautological. The main benefit of 
being in the stadium is that you tell someone you’re in the stadium they 
automatically know that there’s a set of assumptions that they can make about 
the project. The requirement for being in the stadium is that you do the 
necessary work to make those assumptions valid.

If the developers don’t care whether people can validly make those assumptions, 
there’s no pressure on them to be in the stadium. If the users don’t care about 
those assumptions, there’s no reason why they should prefer stadium projects 
over non-stadium projects. It’s essentially just a label that declares that a 
specific set of requirements have been met. It’s up to each individual to 
evaluate whether they care about that specific set of requirements.

--
Paul Carver
VoIP: 732-545-7377
Cell: 908-803-1656
E: pcar...@att.com<mailto:pcar...@att.com>
Q Instant Message
It is difficult to make predictions. Especially about the future.


From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Friday, December 29, 2017 14:00
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [neutron][neutron-lib]Service function defintion 
files

On 28 December 2017 at 06:57, CARVER, PAUL 
<pc2...@att.com<mailto:pc2...@att.com>> wrote:
It was a gating criterion for stadium status. The idea was that for a stadium 
project the neutron team would have review authority over the API but wouldn't 
necessarily review or be overly familiar with the implementation.

A project that didn't have its API definition in neutron-lib could do anything 
it wanted with its API and wouldn't be a neutron subproject because the neutron 
team wouldn't necessarily know anything at all about it.

For a neutron subproject there would at least theoretically be members of the 
neutron team who are familiar with the API and who ensure some sort of 
consistency across APIs of all neutron subprojects.

This is also a gating criterion for publishing API documentation on 
api.openstack.org vs publishing somewhere else. Again, the idea being that the 
neutron team would be able, at least in some sense, to "vouch for" the 
OpenStack networking APIs, but only for "official" neutron stadium subprojects.

Projects that don't meet the stadium criteria, including having api-def in 
neutron-lib, are "anything goes" and not part of neutron because no one from 
the neutron team is assumed to know anything about them. They may work just 
fine, it's just that you can't assume that anyone from neutron has anything to 
do with them or even knows what they do.

OK - that makes logical sense, though it does seem that it would tie specific 
versions of every service in that list to a common version of neutron-lib as a 
byproduct, so it would be impossible to upgrade LBaaS without also potentially 
having to upgrade bgpvpn, for instance.  I don't know if that was the 
intention, but I wouldn't have expected it.
--
Ian.


Re: [openstack-dev] [neutron][neutron-lib]Service function defintion files

2017-12-28 Thread CARVER, PAUL
It was a gating criterion for stadium status. The idea was that for a stadium 
project the neutron team would have review authority over the API but wouldn't 
necessarily review or be overly familiar with the implementation.

A project that didn't have its API definition in neutron-lib could do anything 
it wanted with its API and wouldn't be a neutron subproject because the neutron 
team wouldn't necessarily know anything at all about it.

For a neutron subproject there would at least theoretically be members of the 
neutron team who are familiar with the API and who ensure some sort of 
consistency across APIs of all neutron subprojects.

This is also a gating criterion for publishing API documentation on 
api.openstack.org vs publishing somewhere else. Again, the idea being that the 
neutron team would be able, at least in some sense, to "vouch for" the 
OpenStack networking APIs, but only for "official" neutron stadium subprojects.

Projects that don't meet the stadium criteria, including having api-def in 
neutron-lib, are "anything goes" and not part of neutron because no one from 
the neutron team is assumed to know anything about them. They may work just 
fine, it's just that you can't assume that anyone from neutron has anything to 
do with them or even knows what they do.
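To illustrate the kind of artifact being centralized: a neutron-lib API definition is a declarative Python module describing the API surface. The sketch below is modeled loosely on the conventions of `neutron_lib.api.definitions` (module-level constants plus a resource attribute map), but the names and values are simplified examples, not the real bgpvpn definition.

```python
# Simplified, illustrative sketch of the shape of a neutron-lib API
# definition module. Attribute names and values here are examples only;
# consult neutron_lib.api.definitions for real definitions.
ALIAS = 'example-service'
NAME = 'Example Service Extension'
DESCRIPTION = 'Illustrates a declarative API definition.'

RESOURCE_ATTRIBUTE_MAP = {
    'example_resources': {
        'id': {'allow_post': False, 'allow_put': False,
               'is_visible': True, 'primary_key': True},
        'name': {'allow_post': True, 'allow_put': True,
                 'default': '', 'is_visible': True},
    },
}

# The point of centralizing this: reviewers on the neutron team can see
# and gate the API surface without reading the plugin's implementation.
print(sorted(RESOURCE_ATTRIBUTE_MAP['example_resources']))
```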



--
Paul Carver
V: 732.545.7377
C: 908.803.1656



 Original message 
From: Ian Wells 
Date: 12/27/17 21:57 (GMT-05:00)
To: OpenStack Development Mailing List 
Subject: [openstack-dev] [neutron][neutron-lib]Service function defintion files

Hey,

Can someone explain how the API definition files for several service plugins 
ended up in neutron-lib?  I can see that they've been moved there from the 
plugins themselves (e.g. networking-bgpvpn has 
https://github.com/openstack/neutron-lib/commit/3d3ab8009cf435d946e206849e85d4bc9d149474#diff-11482323575c6bd25b742c3b6ba2bf17)
 and that there's a stadium element to it judging by some earlier commits on 
the same directory, but I don't understand the reasoning why such service 
plugins wouldn't be self-contained - perhaps someone knows the history?

Thanks,
--
Ian.


Re: [openstack-dev] Neutron exception when creating a network using Contrail R4.1 and OpenStack Ocata

2017-12-06 Thread CARVER, PAUL
Anda,

Will you be able to join the OpenContrail summit today? The Zoom link is below 
if you weren't able to make advance plans to join us in person in Austin. 
Hopefully the presentations will be helpful and there will be time for 
discussion directly related to this topic.

https://zoom.us/j/516126818
International numbers available: 
https://zoom.us/zoomconference?m=Ey5u5mAqUMK2XZED55tVEzbxDueqU9Wc

https://www.eventbrite.com/e/opencontrail-user-and-developer-group-kubecon-austin-2017-tickets-40200106601
http://www.opencontrail.org/event/opencontrail-kubecon-austin/


--
Paul Carver
VoIP: 732-545-7377
Cell: 908-803-1656
E: pcar...@att.com
Q Instant Message
It is difficult to make predictions. Especially about the future.


From: Anda Nicolae [mailto:anico...@lenovo.com]
Sent: Monday, December 04, 2017 12:08
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Neutron exception when creating a network using 
Contrail R4.1 and OpenStack Ocata

Hi all,

I am struggling with Contrail R4.1 and OpenStack Ocata on Ubuntu 16.04.2.
I managed to install Contrail R4.1 on Ubuntu 16.04 using contrail-installer 
package and to successfully start all Contrail processes.
Then, I installed OpenStack Ocata using devstack. Unfortunately, I am facing 
some errors.

Worth mentioning that I have also installed only OpenStack Ocata without 
Contrail on Ubuntu 16.04 and it worked without any errors.

Some of the errors I have encountered with OpenStack Ocata and Contrail R4.1 
are:
If I issue: 'neutron net-list', works fine but 'neutron --debug net-create 
net1' returns:

DEBUG: keystoneauth.session GET call to None for http:// 
:5000/v2.0 used request id 
req-ee38087e-3167-4e2f-a3c7-05422984e40d
DEBUG: keystoneauth.identity.v2 Making authentication request to http:// 
/identity/v2.0/tokens
DEBUG: keystoneauth.session REQ: curl -g -i -X POST http:// 
:9696/v2.0/networks.json -H "User-Agent: 
python-neutronclient" -H "Content-Type: application/json" -H "Accept: 
application/json" -H "X-Auth-Token: 
{SHA1}07ab629ceba1f5e5eeb0533eccee9dad135910f9" -d '{"network": {"name": 
"net1", "admin_state_up": true}}'
DEBUG: keystoneauth.session RESP: [404] Content-Type: application/json 
Content-Length: 97 X-Openstack-Request-Id: 
req-ea656a0f-f621-43c7-af48-eaff7fd8d97a Date: Tue, 28 Nov 2017 00:52:14 GMT 
Connection: keep-alive
RESP BODY: {"NeutronError": {"message": "An unknown exception occurred.", 
"type": "NotFound", "detail": ""}}

DEBUG: keystoneauth.session POST call to network for 
http://:9696/v2.0/networks.json
 used request id req-ea656a0f-f621-43c7-af48-eaff7fd8d97a
DEBUG: neutronclient.v2_0.client Error message: {"NeutronError": {"message": 
"An unknown exception occurred.", "type": "NotFound", "detail": ""}}
DEBUG: neutronclient.v2_0.client POST call to neutron for http:// 
:9696/v2.0/networks.json used request id 
req-ea656a0f-f621-43c7-af48-eaff7fd8d97a
ERROR: neutronclient.shell An unknown exception occurred.
Neutron server returns request_ids: ['req-ea656a0f-f621-43c7-af48-eaff7fd8d97a']
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/shell.py", line 
877, in run_subcommand
return run_command(cmd, cmd_parser, sub_argv)
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/shell.py", line 
114, in run_command
return cmd.run(known_args)
  File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/neutron/v2_0/__init__.py",
 line 324, in run
return super(NeutronCommand, self).run(parsed_args)
  File "/usr/local/lib/python2.7/dist-packages/cliff/display.py", line 112, in 
run
column_names, data = self.take_action(parsed_args)
  File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/neutron/v2_0/__init__.py",
 line 406, in take_action
data = obj_creator(body)
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", 
line 798, in create_network
return self.post(self.networks_path, body=body)
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", 
line 366, in post
headers=headers, params=params)
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", 
line 301, in do_request
self._handle_fault_response(status_code, replybody, resp)
 File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", 
line 276, in _handle_fault_response
exception_handler_v20(status_code, error_body)
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", 
line 92, in exception_handler_v20
request_ids=request_ids)
NotFound: An unknown exception occurred.

From q-svc logs, I've found:
ERROR neutron_plugin_contrail.plugins.opencontrail.contrail_plugin_base 

Re: [Openstack] Neutron Service Chains with Linux Bridge?

2016-11-14 Thread CARVER, PAUL
Michael Gale [mailto:gale.mich...@gmail.com] wrote:

>Does anyone know if the work for Neutron Service Chains supports environments 
>built with Linux Bridge as the Neutron ML2 driver?


I don’t think it’s possible. I’m not aware of any document that says Linux 
Bridge doesn’t support modifications to its forwarding tables, but I think 
that’s for the same reason that a car’s owner’s manual is unlikely to mention 
that you can’t seal the doors and use it for deep sea exploration. It’s not at 
all a use case the designers expected.

Service chaining is all about manipulating the forwarding tables in order to 
override the normal “forward via most direct path to destination” behavior. It 
relies on the dataplane having a standard, documented and designed/intended 
mechanism for manipulating the forwarding tables in arbitrary (or at least 
fairly flexible) ways. I don’t believe Linux Bridge was designed with any 
intention to allow external software to manipulate its forwarding behavior on a 
per packet/per destination basis.

OvS and several other dataplanes are explicitly designed with the expectation 
and interface for an external controller to manipulate the forwarding in rich 
and flexible ways.
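For a concrete sense of what "manipulating the forwarding tables" means on an OvS-style dataplane, here is a small sketch that builds an OpenFlow flow spec of the sort `ovs-ofctl add-flow` accepts. The bridge name, port number, and address are hypothetical; this is an illustration of the mechanism service chaining relies on, not a working chain.

```python
# Illustrative only: the kind of forwarding override a service chain
# needs, expressed as an OpenFlow flow spec in ovs-ofctl match syntax.
# Bridge name, port numbers, and addresses below are hypothetical.
def redirect_flow(nw_dst, out_port, priority=100):
    """Flow that steers IP traffic for nw_dst out a service-function
    port instead of the normal "most direct path" next hop."""
    return f"priority={priority},ip,nw_dst={nw_dst},actions=output:{out_port}"


flow = redirect_flow("10.0.10.5", 7)
# On a real node this would be applied with something like:
#   ovs-ofctl add-flow br-int '<flow>'
print(flow)
```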
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] cisco trunk interface in err-disabled

2016-09-20 Thread CARVER, PAUL

Satish Patel [mailto:satish@gmail.com] wrote:

>How do I stop that loop at ovs? Any guideline if I am having issue then 
>someone else must having same issue. 

Neutron uses several bridges in OvS and programs them with flows. In addition 
there are several ways to configure how your compute node connects to the 
physical network. A loop can occur when bridges are connected together in 
multiple places. You may find EasyOVS (https://github.com/yeasy/easyOVS) 
helpful in investigating how your OvS is configured.

You've probably got a misconfiguration where you have something connected where 
it shouldn't be. If you can delete all VMs (to minimize the number of things 
connected to OvS) then just use EasyOVS or the standard OvS commands (e.g. 
ovs-vsctl show) to gather information on bridges and ports. Then just draw it 
out on paper. Look at how the bridges are connected together and how your 
physical NICs (do you have more than one NIC connected to your 2960?) are 
connected to the bridge(s) and you will likely see a loop somewhere.
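The "draw it out on paper" step can also be done in code. The sketch below takes a bridge-to-ports mapping (as you would transcribe it by hand from `ovs-vsctl show` output; the topology shown is hypothetical) and flags any interface attached to more than one bridge, which is one common way the loop described above gets created.

```python
# Rough sketch: given a bridge -> ports mapping transcribed from
# `ovs-vsctl show`, flag any interface attached to more than one
# bridge -- a common source of forwarding loops.
from collections import defaultdict


def find_multi_attached(bridges):
    """Return {port: [bridges]} for ports appearing on 2+ bridges."""
    seen = defaultdict(list)
    for bridge, ports in bridges.items():
        for port in ports:
            seen[port].append(bridge)
    return {p: brs for p, brs in seen.items() if len(brs) > 1}


# Hypothetical topology transcribed by hand:
topology = {
    "br-int": ["patch-tun", "tap01"],
    "br-ex": ["eth0", "patch-tun"],  # same port on two bridges: suspicious
}
print(find_multi_attached(topology))  # -> {'patch-tun': ['br-int', 'br-ex']}
```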



Re: [Openstack] password in clear text

2016-03-23 Thread CARVER, PAUL
Jagga Soorma wrote:

>Currently when using the openstack api I have to save my password in clear 
>text in
>the OS_PASSWORD environment variable.  Is there a more secure way to use the
>openstack api without having to either store this password in clear text or 
>enter the
>password manually every time I run a openstack command?  Is there some way that
>I can use a token id?  I have tried but can't seem to get it to work and not 
>sure what
>else is possible. 

If the token will allow you to use services and you store the token in clear 
text then
you’ve only managed to rename your password to token without adding any 
security.

What you need to think about is what are you willing to type and when are you 
willing
to type it. I don’t know if anyone has a polished “official” implementation, 
but a couple
of options:

1) Configure one of your login scripts to prompt for your OpenStack password
and export it, rather than putting it directly in the script.

2) Encrypt your home directory and store your "clear text" password in a file
in your encrypted home directory.

3) Put your password in a file on a USB flash drive (in an encrypted file if
you want a double layer of security) and create a wrapper script that reads
your password from a fixed location on the USB drive when you run a command.
(Keep the USB drive in a physical safe when not in use.)
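Option 1 above can be sketched in a few lines. Nothing here is OpenStack-specific; `getpass` just avoids echoing the password to the terminal, and the function name and flow are illustrative.

```python
# Minimal sketch of option 1: keep OS_PASSWORD out of any file by
# prompting once per session. Function name is illustrative.
import getpass
import os


def load_os_password():
    """Return OS_PASSWORD from the environment, prompting if unset."""
    password = os.environ.get("OS_PASSWORD")
    if not password:
        # No echo to the terminal; runs once per shell session.
        password = getpass.getpass("OpenStack password: ")
        os.environ["OS_PASSWORD"] = password  # visible to child processes
    return password
```

A login script would call this once, after which `openstack` client invocations in the same session inherit the variable.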




[openstack-dev] [Neutron] (RE: Change in openstack/neutron-specs[master]: Introducing Tap-as-a-Service)

2015-02-24 Thread CARVER, PAUL
Maybe I'm misreading review.o.o, but I don't see the -2. There was a -2 from 
Salvatore Orlando with the comment "The -2 on this patch is only to deter 
further comments" and a link to 140292, but 140292 has a comment from Kyle 
saying it's been abandoned in favor of going back to 96149. Are we in a loop 
here?

We're moving forward internally with proprietary mechanisms for attaching 
analyzers, but it sure would be nice if there were a standard API. Anybody who 
thinks switches don't need SPAN/mirror ports has probably never worked in 
Operations on a real production network where SLAs were taken seriously and 
enforced.

I know there's been a lot of heated discussion around this spec for a variety 
of reasons, but there isn't an enterprise class hardware switch on the market 
that doesn't support SPAN/mirror. Lack of this capability is a glaring omission 
in Neutron that keeps Operations type folks opposed to using it because it 
causes them to lose visibility that they've had for ages. We're getting a lot 
of pressure to continue deploying hardware analyzers and/or deploy 
non-OpenStack mechanisms for implementing tap/SPAN/mirror capability when I'd 
much rather integrate the analyzers into OpenStack.


-Original Message-
From: Kyle Mestery (Code Review) [mailto:rev...@openstack.org] 
Sent: Tuesday, February 24, 2015 17:37
To: vinay yadhav
Cc: CARVER, PAUL; Marios Andreou; Sumit Naiksatam; Anil Rao; Carlos Gonçalves; 
YAMAMOTO Takashi; Ryan Moats; Pino de Candia; Isaku Yamahata; Tomoe Sugihara; 
Stephen Wong; Kanzhe Jiang; Bao Wang; Bob Melander; Salvatore Orlando; Armando 
Migliaccio; Mohammad Banikazemi; mark mcclain; Henry Gessau; Adrian Hoban; 
Hareesh Puthalath; Subrahmanyam Ongole; Fawad Khaliq; Baohua Yang; Maruti 
Kamat; Stefano Maffulli 'reed'; Akihiro Motoki; ijw-ubuntu; Stephen Gordon; 
Rudrajit Tapadar; Alan Kavanagh; Zoltán Lajos Kis
Subject: Change in openstack/neutron-specs[master]: Introducing Tap-as-a-Service

Kyle Mestery has abandoned this change.

Change subject: Introducing Tap-as-a-Service
..


Abandoned

This review is 4 weeks without comment and currently blocked by a core 
reviewer with a -2. We are abandoning this for now. Feel free to reactivate the 
review by pressing the restore button and contacting the reviewer with the -2 
on this review to ensure you address their concerns.

-- 
To view, visit https://review.openstack.org/96149
To unsubscribe, visit https://review.openstack.org/settings

Gerrit-MessageType: abandon
Gerrit-Change-Id: I087d9d2a802ea39c02259f17d2b8c4e2f6d8d714
Gerrit-PatchSet: 8
Gerrit-Project: openstack/neutron-specs
Gerrit-Branch: master
Gerrit-Owner: vinay yadhav vinay.yad...@ericsson.com
Gerrit-Reviewer: Adrian Hoban adrian.ho...@intel.com
Gerrit-Reviewer: Akihiro Motoki amot...@gmail.com
Gerrit-Reviewer: Alan Kavanagh alan.kavan...@ericsson.com
Gerrit-Reviewer: Anil Rao arao...@gmail.com
Gerrit-Reviewer: Armando Migliaccio arma...@gmail.com
Gerrit-Reviewer: Bao Wang baowan...@yahoo.com
Gerrit-Reviewer: Baohua Yang bao...@linux.vnet.ibm.com
Gerrit-Reviewer: Bob Melander bob.melan...@gmail.com
Gerrit-Reviewer: Carlos Gonçalves m...@cgoncalves.pt
Gerrit-Reviewer: Fawad Khaliq fa...@plumgrid.com
Gerrit-Reviewer: Hareesh Puthalath hareesh.puthal...@gmail.com
Gerrit-Reviewer: Henry Gessau ges...@cisco.com
Gerrit-Reviewer: Isaku Yamahata yamahata.rev...@gmail.com
Gerrit-Reviewer: Jenkins
Gerrit-Reviewer: Kanzhe Jiang kan...@gmail.com
Gerrit-Reviewer: Kyle Mestery mest...@mestery.com
Gerrit-Reviewer: Marios Andreou mar...@redhat.com
Gerrit-Reviewer: Maruti Kamat maruti.ka...@hp.com
Gerrit-Reviewer: Mohammad Banikazemi m...@us.ibm.com
Gerrit-Reviewer: Paul Carver pcar...@att.com
Gerrit-Reviewer: Pino de Candia gdecan...@midokura.com
Gerrit-Reviewer: Rudrajit Tapadar rudrajit.tapa...@gmail.com
Gerrit-Reviewer: Ryan Moats rmo...@us.ibm.com
Gerrit-Reviewer: Salvatore Orlando salv.orla...@gmail.com
Gerrit-Reviewer: Stefano Maffulli 'reed' stef...@openstack.org
Gerrit-Reviewer: Stephen Gordon sgor...@redhat.com
Gerrit-Reviewer: Stephen Wong stephen.kf.w...@gmail.com
Gerrit-Reviewer: Subrahmanyam Ongole song...@oneconvergence.com
Gerrit-Reviewer: Sumit Naiksatam sumitnaiksa...@gmail.com
Gerrit-Reviewer: Tomoe Sugihara to...@midokura.com
Gerrit-Reviewer: Welcome, new contributor!
Gerrit-Reviewer: YAMAMOTO Takashi yamam...@valinux.co.jp
Gerrit-Reviewer: Zoltán Lajos Kis zoltan.lajos@ericsson.com
Gerrit-Reviewer: ijw-ubuntu iawe...@cisco.com
Gerrit-Reviewer: mark mcclain m...@mcclain.xyz
Gerrit-Reviewer: vinay yadhav vinay.yad...@ericsson.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack] Networking documentation

2014-11-13 Thread CARVER, PAUL
If anyone knows where this page 
http://docs.openstack.org/havana/config-reference/content/under_the_hood_openvswitch.html
went in the Juno documentation, please let me know.
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Question about VXLAN support

2014-09-18 Thread CARVER, PAUL
I haven’t done this on a VXLAN deployment, but I did a packet capture on a 
Grizzly GRE environment to see exactly what it looks like on the wire, and the 
results are below (although the color coding will probably be lost). What we 
have with GRE is 802.1Q inside the encapsulated payload, so if the 4096-VLAN 
limit is a concern then this doesn’t alleviate it.

What I’m not sure of is whether the VLAN ID inside of the GRE payload is 
globally unique or whether different compute nodes can reuse the same VLAN ID 
for different Neutron networks.

If somebody has a VXLAN environment up and running I would love to see a packet 
capture similar to the one below (hint, hint :-))

Frame 1: 136 bytes on wire (1088 bits), 136 bytes captured (1088 bits)
Ethernet II, Src: e8:9a:8f:23:41:8f (e8:9a:8f:23:41:8f), Dst: e8:9a:8f:23:42:8d 
(e8:9a:8f:23:42:8d)
Internet Protocol Version 4, Src: 192.168.160.13 (192.168.160.13), Dst: 
192.168.160.21 (192.168.160.21)
Generic Routing Encapsulation (Transparent Ethernet bridging)
Ethernet II, Src: fa:16:3e:69:42:0d (fa:16:3e:69:42:0d), Dst: fa:16:3e:0c:bc:fa 
(fa:16:3e:0c:bc:fa)
802.1Q Virtual LAN, PRI: 0, CFI: 0, ID: 1
Internet Protocol Version 4, Src: 192.168.20.3 (192.168.20.3), Dst: 
12.129.192.149 (12.129.192.149)
User Datagram Protocol, Src Port: 123 (123), Dst Port: 123 (123)
Network Time Protocol (NTP Version 4, client)

Frame 2: 88 bytes on wire (704 bits), 88 bytes captured (704 bits)
Ethernet II, Src: e8:9a:8f:23:41:8f (e8:9a:8f:23:41:8f), Dst: e8:9a:8f:23:41:f1 
(e8:9a:8f:23:41:f1)
Internet Protocol Version 4, Src: 192.168.160.13 (192.168.160.13), Dst: 
192.168.160.14 (192.168.160.14)
Generic Routing Encapsulation (Transparent Ethernet bridging)
Ethernet II, Src: fa:16:3e:35:e3:4f (fa:16:3e:35:e3:4f), Dst: Broadcast 
(ff:ff:ff:ff:ff:ff)
802.1Q Virtual LAN, PRI: 0, CFI: 0, ID: 1
Address Resolution Protocol (request)

Frame 3: 88 bytes on wire (704 bits), 88 bytes captured (704 bits)
Ethernet II, Src: e8:9a:8f:23:41:8f (e8:9a:8f:23:41:8f), Dst: e8:9a:8f:23:42:8d 
(e8:9a:8f:23:42:8d)
Internet Protocol Version 4, Src: 192.168.160.13 (192.168.160.13), Dst: 
192.168.160.21 (192.168.160.21)
Generic Routing Encapsulation (Transparent Ethernet bridging)
Ethernet II, Src: fa:16:3e:35:e3:4f (fa:16:3e:35:e3:4f), Dst: Broadcast 
(ff:ff:ff:ff:ff:ff)
802.1Q Virtual LAN, PRI: 0, CFI: 0, ID: 1
Address Resolution Protocol (request)

Frame 4: 88 bytes on wire (704 bits), 88 bytes captured (704 bits)
Ethernet II, Src: e8:9a:8f:23:42:8d (e8:9a:8f:23:42:8d), Dst: e8:9a:8f:23:41:8f 
(e8:9a:8f:23:41:8f)
Internet Protocol Version 4, Src: 192.168.160.21 (192.168.160.21), Dst: 
192.168.160.13 (192.168.160.13)
Generic Routing Encapsulation (Transparent Ethernet bridging)
Ethernet II, Src: fa:16:3e:b5:19:db (fa:16:3e:b5:19:db), Dst: fa:16:3e:35:e3:4f 
(fa:16:3e:35:e3:4f)
802.1Q Virtual LAN, PRI: 0, CFI: 0, ID: 5
Address Resolution Protocol (reply)

Frame 5: 388 bytes on wire (3104 bits), 388 bytes captured (3104 bits)
Ethernet II, Src: e8:9a:8f:23:41:8f (e8:9a:8f:23:41:8f), Dst: e8:9a:8f:23:42:8d 
(e8:9a:8f:23:42:8d)
Internet Protocol Version 4, Src: 192.168.160.13 (192.168.160.13), Dst: 
192.168.160.21 (192.168.160.21)
Generic Routing Encapsulation (Transparent Ethernet bridging)
Ethernet II, Src: fa:16:3e:35:e3:4f (fa:16:3e:35:e3:4f), Dst: fa:16:3e:b5:19:db 
(fa:16:3e:b5:19:db)
802.1Q Virtual LAN, PRI: 0, CFI: 0, ID: 1
Internet Protocol Version 4, Src: 192.168.20.10 (192.168.20.10), Dst: 
192.168.20.2 (192.168.20.2)
User Datagram Protocol, Src Port: 68 (68), Dst Port: 67 (67)
Bootstrap Protocol (Request)
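To make the encapsulation concrete, here is a minimal sketch of pulling the inner 802.1Q VLAN ID out of a GRE frame like the ones above. It assumes a 20-byte outer IPv4 header and the basic 4-byte GRE header with no optional checksum/key/sequence fields, as in the capture; the synthetic frame at the bottom is an illustration, not real capture data:

```python
import struct

def inner_vlan_id(frame: bytes) -> int:
    """Extract the 802.1Q VLAN ID carried inside a GRE-encapsulated
    Ethernet frame (GRE protocol 0x6558, Transparent Ethernet bridging)."""
    ETH = 14
    outer_ethertype = struct.unpack_from("!H", frame, 12)[0]
    assert outer_ethertype == 0x0800           # outer IPv4
    ihl = (frame[ETH] & 0x0F) * 4              # outer IP header length
    gre_off = ETH + ihl
    gre_flags, gre_proto = struct.unpack_from("!HH", frame, gre_off)
    assert gre_proto == 0x6558                 # Transparent Ethernet bridging
    inner_eth = gre_off + 4                    # basic GRE header is 4 bytes
    inner_ethertype = struct.unpack_from("!H", frame, inner_eth + 12)[0]
    assert inner_ethertype == 0x8100           # 802.1Q tagged payload
    tci = struct.unpack_from("!H", frame, inner_eth + 14)[0]
    return tci & 0x0FFF                        # VLAN ID is the low 12 bits

# Build a minimal synthetic frame matching Frame 4 above (inner VLAN ID 5):
outer_eth = bytes(12) + b"\x08\x00"            # zeroed MACs, ethertype IPv4
outer_ip = bytes([0x45]) + bytes(19)           # IPv4, IHL=5, other fields zeroed
gre = b"\x00\x00\x65\x58"                      # flags=0, proto=0x6558
inner_eth = bytes(12) + b"\x81\x00"            # zeroed MACs, ethertype 802.1Q
dot1q = struct.pack("!H", 5) + b"\x08\x00"     # PRI=0, CFI=0, VID=5
frame = outer_eth + outer_ip + gre + inner_eth + dot1q

print(inner_vlan_id(frame))                    # → 5
```

Running the same parse over a real capture would answer the question above: if two compute nodes reuse VID 1 inside their GRE payloads for different Neutron networks, the inner VLAN ID is clearly only locally significant.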





Re: [openstack-dev] [Neutron][DevStack] How to increase developer usage of Neutron

2014-08-14 Thread CARVER, PAUL
Mike Spreitzer [mailto:mspre...@us.ibm.com] wrote:

I'll bet I am not the only developer who is not highly competent with
bridges and tunnels, Open VSwitch, Neutron configuration, and how DevStack
transmutes all those.  My bet is that you would have more developers using
Neutron if there were an easy-to-find and easy-to-follow recipe to use, to
create a developer install of OpenStack with Neutron.  One that's a pretty
basic and easy case.  Let's say a developer gets a recent image of Ubuntu
14.04 from Canonical, and creates an instance in some undercloud, and that
instance has just one NIC, at 10.9.8.7/16.  If there were a recipe for
such a developer to follow from that point on, it would be great.

https://wiki.openstack.org/wiki/NeutronDevstack worked for me.

However, I'm pretty sure it's only a single-node, all-in-one setup. At least,
I created only one VM to run it on, and I don't think DevStack has created
multiple nested VMs inside the one I created to run DevStack. I haven't
gotten around to figuring out how to set up a full multi-node DevStack
environment with separate compute nodes, network nodes, and GRE/VXLAN tunnels.

There are multi-node instructions on that wiki page but I haven't tried
following them. If someone has a Vagrant file that creates a full multi-
node Neutron devstack complete with GRE/VXLAN tunnels it would be great
if they could add it to that wiki page.




Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-14 Thread CARVER, PAUL
Daniel P. Berrange [mailto:berra...@redhat.com] wrote:

Depending on the usage needs, I think Google hangouts is a quite useful
technology. For many-to-many session its limit of 10 participants can be
an issue, but for a few-to-many broadcast it could be practical. What I
find particularly appealing is the way it can live stream the session
over youtube which allows for unlimited number of viewers, as well as
being available offline for later catchup.

I can't actually offer AT&T resources without getting some level of
management approval first, but just for the sake of discussion here's
some info about the telepresence system we use.

-=-=-=-=-=-=-=-=-=-
ATS B2B Telepresence conferences can be conducted with an external company's
Telepresence room(s), which subscribe to the AT&T Telepresence Solution,
or a limited number of other Telepresence service providers' networks.

Currently, the number of Telepresence rooms that can participate in a B2B
conference is limited to a combined total of 20 rooms (19 of which can be
AT&T rooms, depending on the number of remote endpoints included).
-=-=-=-=-=-=-=-=-=-

We currently have B2B interconnect with over 100 companies, and AT&T has
telepresence rooms in many of our locations around the US and around
the world. If other large OpenStack companies also have telepresence
rooms that we could interconnect with, I think it might be possible
to get management agreement to hold a couple of OpenStack meetups per
year.

Most of our rooms are best suited for 6 people, but I know of at least
one 18 person telepresence room near me.



Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-14 Thread CARVER, PAUL
Russell Bryant [mailto:rbry...@redhat.com] wrote:

An ideal solution would allow attendees to join as individuals from
anywhere.  A lot of contributors work from home.  Is that sort of thing
compatible with your system?

In principle, yes, but that loses the immersive telepresence aspect
which is the next best thing to an in-person meetup (which is where
this thread started.)

AT&T employees live and breathe on AT&T Connect, which is our
teleconferencing (not telepresence) service. It supports webcam
video as well as desktop sharing, but I'm on the verge of making
a sales pitch here, which was NOT my intent.

I'm on AT&T Connect meetings 5+ times a day, but I'm biased, so I
won't offer any opinion on how it compares to WebEx, GoToMeeting,
and other services. None of them are really equivalent to the
purpose-built telepresence rooms.

My point was that there may well be a telepresence room within
reasonable driving distance for a large number of OpenStack
contributors if we were able to get a number of the large
OpenStack participant companies to open their doors to an
occasional meet-up. Instead of asking participants around
the globe to converge on a single physical location for
a meet-up, perhaps they could converge on the closest of
20 different locations that are linked via telepresence.



Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-13 Thread CARVER, PAUL
Daniel P. Berrange [mailto:berra...@redhat.com] wrote:

our dispersed contributor base. I think that we should be examining
what we can achieve with some kind of virtual online mid-cycle meetups
instead. Using technology like google hangouts or some similar live
collaboration technology, not merely an IRC discussion. Pick a 2-3
day period, schedule formal agendas / talking slots as you would with
a physical summit and so on. I feel this would be more inclusive to
our community as a whole, avoid excessive travel costs, so allowing
more of our community to attend the bigger design summits. It would
even open possibility of having multiple meetups during a cycle (eg
could arrange mini virtual events around each milestone if we wanted)

How about arranging some high quality telepresence rooms? A number of
the big companies associated with OpenStack either make or own some
pretty nice systems. Perhaps it could be negotiated for some of these
companies to open their doors to allow OpenStack developers for some
scheduled events.

With some scheduling and coordination effort it would probably be
possible to setup a bunch of local meet-up points interconnected
by telepresence links.



Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-11 Thread CARVER, PAUL
loy wolfe [mailto:loywo...@gmail.com] wrote:

Then since Network/Subnet/Port will never be treated just as a LEGACY
COMPATIBLE role, there is no need to extend the Nova-Neutron interface to
follow the GBP resources. Anyway, an optional service plugin inside
Neutron shouldn't have any impact on the Nova side.

This gets to the root of why I was getting confused about Jay and others
having Nova related concerns. I was/am assuming that GBP is simply an
*additional* mechanism for manipulating Neutron, not a deprecation of any
part of the existing Neutron API. I think Jay's concern and the reason
why he keeps mentioning Nova as the biggest and most important consumer
of Neutron's API stems from an assumption that Nova would need to change
to use the GBP API.

If I've understood the follow-on discussions correctly, there's no need for
Nova to use the GBP API at all until/unless the Nova developers see benefit
in it, because they can continue to accomplish everything with the existing
API. The GBP API simply provides a more application-centric, rather than
network-centric, representation of the same thing.



Re: [openstack-dev] [Neutron] Is network ordering of vNICs guaranteed?

2014-08-11 Thread CARVER, PAUL
Armando M. [mailto:arma...@gmail.com] wrote:

On 9 August 2014 10:16, Jay Pipes jaypi...@gmail.com wrote:
Paul, does this friend of a friend have a reproducible test
script for this?

We would also need to know the OpenStack release where this issue manifest
itself. A number of bugs have been raised in the past around this type of
issue, and the last fix I recall is this one:

https://bugs.launchpad.net/nova/+bug/1300325

It's possible that this might have regressed, though.

The reason I called it friend of a friend is because I think the info
has filtered through a series of people and is not firsthand observation.
I'll ask them to track back to who actually observed the behavior, how
long ago, and with what version.

It could be a regression, or it could just be old info that people have
continued to assume is true without realizing it was considered a bug
all along and has been fixed.

Thanks! The moment I first heard it my first reaction was that it was
almost certainly a bug and had probably already been fixed.



Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-08 Thread CARVER, PAUL
Wuhongning [mailto:wuhongn...@huawei.com] wrote:

Does it make sense to move all advanced extension out of ML2, like security
group, qos...? Then we can just talk about advanced service itself, without
bothering basic neutron object (network/subnet/port)

A modular layer 3 (ML3) analogous to ML2 sounds like a good idea. I still
think it's too late in the game to be shooting down all the work that the
GBP team has put in unless there's a really clean and effective way of
running AND iterating on GBP in conjunction with Neutron without being
part of the Juno release. As far as I can tell they've worked really
hard to follow the process and accommodate input. They shouldn't have
to wait multiple more releases on a hypothetical refactoring of how L3+ vs
L2 is structured.

But, just so I'm not making a horrible mistake, can someone reassure me
that GBP isn't removing the constructs of network/subnet/port from Neutron?

I'm under the impression that GBP is adding a higher level abstraction
but that it's not ripping basic constructs like network/subnet/port out
of the existing API. If I'm wrong about that I'll have to change my
opinion. We need those fundamental networking constructs to be present
and accessible to users that want/need to deal with them. I'm viewing
GBP as just a higher level abstraction over the top.




[openstack-dev] [Neutron] Is network ordering of vNICs guaranteed?

2014-08-08 Thread CARVER, PAUL
I'm hearing secondhand (friend of a friend) that people have looked at the code 
and determined that the order of networks on a VM is not guaranteed. Can anyone 
confirm whether this is true? If it is true, is there any reason why this is 
not considered a bug? I've never seen it happen myself.

To elaborate, I'm being told that if you create some VMs with several vNICs on 
each and you want them to be, for example:


1)  Management Network

2)  Production Network

3)  Storage Network

You can't count on all the VMs having eth0 connected to the management network, 
eth1 on the production network, eth2 on the storage network.

I'm being told that they will come up like that most of the time, but sometimes 
you will see, for example, a VM wind up with eth0 connected to the 
production network, eth1 to the storage network, and eth2 connected to the 
management network (or some other permutation).




Re: [openstack-dev] How to improve the specs review process (was Re: [Neutron] Group Based Policy and the way forward)

2014-08-06 Thread CARVER, PAUL
On Aug 6, 2014, at 2:01 PM, Mohammad Banikazemi 
m...@us.ibm.com wrote:

Yes, indeed.
I do not want to be over-dramatic, but the discussion on the original "Group
Based Policy and the way forward" thread is nothing short of heartbreaking.
After months and months of discussions, three presentations at the past three
summits, a design session at the last summit, and (most relevant to this
thread) the approval of the spec, why are we talking about the merits of the
work now?

I understand if people think this is not a good idea or this is not a good
time. What I do not understand is why these concerns were not raised clearly
and openly earlier.

I have to agree here. I'm not sure whether my organization needs GBP or not.
It's certainly not our top priority for Neutron given a variety of other more
important functional gaps. However, I saw their demo at the summit and it was
clear that a lot of work had gone into it even before Icehouse. From the demo
it was clearly a useful enhancement to Neutron even if it wasn't at the top
of my priority list.

For people to be asking to justify the "why" this far into the Juno cycle,
when the spec was approved and the code was demoed at the summit, really
brings the OpenStack process into question. It's one thing to discuss the
technical merits of contributions, but it's totally different to pull the rug
out from under a group of contributors at the last minute after such a long
period of development, discussion, and demonstration.

Seeing this sort of last minute rejection of a contribution after so much
time has been invested in it could very easily have a chilling effect on
contributors.




Re: [openstack-dev] [Neutron] Specs approved for Juno-3 and exceptions

2014-07-24 Thread CARVER, PAUL

Alan Kavanagh wrote:

If we have more work being put on the table, then more core members would
definitely go a long way with assisting this; we can't let waiting for reviews
become an excuse for not getting features landed in a given release.

Stability is absolutely essential so we can't force things through without
adequate review. The automated CI testing in OpenStack is impressive, but
it is far from flawless and even if it worked perfectly it's still just
CI, not AI. There's a large class of problems that it just can't catch.

I agree with Alan that if there's a discrepancy between the amount of code
that folks would like to land in a release and the number of core-member
working hours in a six-month period, then that is something the board needs
to take an interest in.

I think a friendly adversarial approach is healthy for OpenStack. Specs and
code should need to be defended, not just rubber stamped. Having core
reviewers critiquing code written by their competitors, suppliers, or vendors
is healthy for the overall code quality. However, simply having specs and
code not get reviewed at all due to a shortage of core reviewers is not
healthy and will limit the success of OpenStack.

I don't really follow Linux kernel development, but a quick search turned
up [1], which seems to indicate at least one additional level between
developer and core (depending on whether we consider the Linus and Andrew
levels unto themselves, and whether we consider OpenStack projects as full
systems or as subsystems of OpenStack).

Speaking only for myself and not AT&T, I'm disappointed that my employer
doesn't have more developers actively writing code. We ought to (in my
personal opinion) be supplying core reviewers to at least a couple of
OpenStack projects. But one way or another we need to get more capabilities
reviewed and merged. My personal top disappointments are with the current
state of IPv6, HA, and QoS, but I'm sure other folks can list lots of other
capabilities that they're really going to be frustrated to find lacking in Juno.

[1] 
http://techblog.aasisvinayak.com/linux-kernel-development-process-how-it-works/





Re: [openstack-dev] [Neutron] [Spec freeze exception] Support Stateful and Stateless DHCPv6 by dnsmasq

2014-07-24 Thread CARVER, PAUL
Collins, Sean wrote:

 On Wed, Jul 23, 2014 at 12:06:06AM EDT, Xu Han Peng wrote:
 I would like to request one Juno Spec freeze exception for Support Stateful
 and Stateless DHCPv6 by dnsmasq BP.
  
 The spec is under review:
 https://review.openstack.org/#/c/102411/
 
 Code change for this BP is submitted as well for a while:
 https://review.openstack.org/#/c/106299/

I'd like to +1 this request: if this work landed in Juno, it would
mean Neutron would have 100% support for all IPv6 subnet attribute
settings, since SLAAC support landed in J-2.
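For reference, dnsmasq's own documented flags distinguish the two DHCPv6 modes being discussed roughly as follows. These are illustrative invocations only; the exact options Neutron's DHCP agent would pass are defined by the spec under review, and the 2001:db8:: prefix is a documentation placeholder:

```shell
# Stateful DHCPv6: dnsmasq itself leases addresses from the given range.
dnsmasq --enable-ra --dhcp-range=2001:db8::100,2001:db8::1ff,64,12h

# Stateless DHCPv6: hosts self-configure addresses via SLAAC; dnsmasq
# answers Information-Requests for other options (DNS servers, etc.).
dnsmasq --enable-ra --dhcp-range=2001:db8::,ra-stateless
```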

+1 on this from me too. It's getting more and more difficult to keep
claiming OpenStack will have IPv6 "any day now", and not having it
in Juno will hurt credibility a lot.

IPv4 address space is basically gone. AT&T has a fair amount of it,
but even we're feeling the pinch. A lot of companies have it worse.
NAT is a mediocre stop-gap at best.
We've been running IPv6 in production for well over a year.

Our pre-OpenStack environments support IPv6 where needed
even though we have a lot of IPv4 running where we aren't feeling
immediate pressure. We're having to turn internal applications away
from our OpenStack based cloud because they require IPv6 and we
can't provide it.

We're actively searching for workarounds but none of them are
attractive.



Re: [openstack-dev] [Neutron] cloud-init IPv6 support

2014-07-07 Thread CARVER, PAUL

Andrew Mann wrote:
What's the use case for an IPv6 endpoint? This service is just for instance 
metadata,
so as long as a requirement to support IPv4 is in place, using solely an IPv4 
endpoint
avoids a number of complexities:

The obvious use case would be deprecation of IPv4, but the question is when. 
Should I
expect to be able to run a VM without IPv4 in 2014 or is IPv4 mandatory for all 
VMs?
What about the year 2020 or 2050 or 2100? Do we ever reach a point where we can 
turn
off IPv4 or will we need IPv4 for eternity?

Right now it seems that we need IPv4 because cloud-init itself doesn’t appear 
to support
IPv6 as a datasource. I’m going by this documentation
http://cloudinit.readthedocs.org/en/latest/topics/datasources.html#what-is-a-datasource
where the “magic ip” of 169.254.169.254 is referenced as well as some non-IP 
mechanisms.

It wouldn’t be sufficient for OpenStack to support an IPv6 metadata address as 
long as
most tenants are likely to be using a version of cloud-init that doesn’t know 
about IPv6
so step one would be to find out whether the maintainer of cloud-init is open 
to the
idea of IPv4-less clouds.

If so, then picking a link local IPv6 address seems like the obvious thing to 
do and the
update to Neutron should be pretty trivial. There are a few references to that
“magic ip”
https://github.com/openstack/neutron/search?p=2&q=169.254.169.254&ref=cmdform
but the main one is the iptables redirect rule in the L3 agent:
https://github.com/openstack/neutron/blob/master/neutron/agent/l3_agent.py#L684
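For illustration, if a link-local IPv6 metadata address were chosen, the IPv6 analogue of that redirect rule might look roughly like this. The address fe80::a9fe:a9fe is a hypothetical pick (the IPv6 spelling of 169.254.169.254), port 9697 mirrors the typical metadata-proxy port, and NAT support in ip6tables requires a reasonably recent kernel:

```shell
# Hypothetical IPv6 counterpart of the L3 agent's metadata redirect rule:
# catch traffic to the link-local metadata address on port 80 and hand it
# to the local metadata proxy, as the IPv4 rule does for 169.254.169.254.
ip6tables -t nat -A PREROUTING \
    -d fe80::a9fe:a9fe/128 -p tcp -m tcp --dport 80 \
    -j REDIRECT --to-ports 9697
```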




Re: [openstack-dev] Using tmux instead of screen in devstack

2014-07-01 Thread CARVER, PAUL
Anant Patil wrote:
I use tmux (an alternative to screen) a lot, and I believe a lot of other 
developers use it.
I have been using devstack for some time now and would like to add the option 
of
using tmux instead of screen for creating sessions for openstack services.
I couldn't find a way to do that in current implementation of devstack.

Is it just familiarity, or are there specific features lacking in screen that 
you think tmux would bring to devstack? I’ve tried tmux a couple of times but 
didn’t find any compelling reason to switch from screen. I wouldn’t argue 
against anyone who wants to use it for their day-to-day needs. But don’t just 
change devstack on a whim; list out the objective benefits.

Having a configuration option to switch between devstack-screen and 
devstack-tmux
seems like it would probably add more complexity than benefit, especially if 
there
are any functional differences. If there are functional differences it would be 
better
to decide which one is best (for devstack, not necessarily best for everyone in 
the world)
and go with that one only.




Re: [openstack-dev] [Neutron] DVR SNAT shortcut

2014-06-26 Thread CARVER, PAUL






Yi wrote:
+1, I had another email to discuss FW (FWaaS) and DVR integration. 
Traditionally, we run the firewall with the router so that the firewall can use 
route and NAT info from the router. Since DVR is asymmetric when handling 
traffic, it is hard to run a stateful firewall on top of DVR the way a 
traditional firewall does. When NAT is in the picture, the situation can be 
even worse.
Yi


Don't forget logging either. In any security-conscious environment, 
particularly any place with legal/regulatory/contractual audit requirements, a 
firewall that doesn't keep full logs of all dropped and passed sessions is 
worthless.

Stateless packet dropping doesn't help at all when conducting forensics on an 
attack that is already known to have occurred.





[openstack-dev] [Neutron] High bandwidth routers

2014-06-23 Thread CARVER, PAUL
Is anyone using Neutron for high-bandwidth workloads? (For the sake of 
discussion, let's say high = 50 Gbps or greater.)

With routers being implemented as network namespaces within x86 servers it 
seems like Neutron networks would be pretty bandwidth constrained relative to 
real routers.

As we start migrating the physical connections on our physical routers from 
multiple of 10G to multiples of 100G, I'm wondering if Neutron has a clear 
roadmap towards networks where the bandwidth requirements exceed what an x86 
box can do.

Is the thinking that x86 boxes will soon be capable of 100G and multi-100G 
throughput? Or does DVR take care of this by spreading the routing function 
over a large number of compute nodes so that we don't need to channel 
multi-100G flows through single network nodes?

I'm mostly thinking about WAN connectivity here, video and big data 
applications moving huge amounts of traffic into and out of OpenStack based 
datacenters.



[openstack-dev] [neutron] Can tenants provide hints to router scheduling?

2014-06-13 Thread CARVER, PAUL
Suppose a tenant knows that some of their networks are particularly high 
bandwidth and others are relatively low bandwidth.

Is there any mechanism that a tenant can use to let Neutron know what sort of 
bandwidth is expected through a particular router?

I'm concerned about the physical NICs on some of our network nodes getting 
saturated if several virtual routers that end up on the same network node 
happen to be serving multi-Gbps networks.

I'm looking through 
https://github.com/openstack/neutron/blob/master/neutron/scheduler/l3_agent_scheduler.py
and it appears the only choices are ChanceScheduler, which just calls 
random.choice, and LeastRoutersScheduler, which appears to make its decision 
based on the simple quantity of routers per L3 agent.

Are there any blueprints or WIP for taking bandwidth utilization into account 
when scheduling routers?
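For illustration only, a bandwidth-aware variant of LeastRoutersScheduler might look something like the sketch below. The per-router "declared_gbps" hint is hypothetical (no such attribute exists in Neutron today), and this is a standalone toy, not agent-scheduler plumbing:

```python
import random

class LeastBandwidthScheduler:
    """Toy sketch: place a new router on the L3 agent with the least total
    declared bandwidth, breaking ties randomly (as ChanceScheduler does).
    'routers_by_agent' maps agent name -> list of router dicts, where each
    router dict carries the hypothetical 'declared_gbps' tenant hint."""

    def schedule(self, agents, routers_by_agent):
        def load(agent):
            # Sum the declared bandwidth of routers already on this agent.
            return sum(r["declared_gbps"] for r in routers_by_agent.get(agent, []))

        least = min(load(a) for a in agents)
        candidates = [a for a in agents if load(a) == least]
        return random.choice(candidates)

sched = LeastBandwidthScheduler()
placement = {
    "agent-1": [{"declared_gbps": 10}],   # already carrying a 10 Gbps router
    "agent-2": [{"declared_gbps": 1}],    # lightly loaded
}
print(sched.schedule(["agent-1", "agent-2"], placement))  # → agent-2
```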


Re: [Openstack] Complex Decision Arround The Most Well-Known CMPs

2014-06-05 Thread CARVER, PAUL
hossein zabolzadeh wrote:
I want to fully virtualize my datacenter. I don't want cloud. I don't want to 
deliver a public cloud. I have several legacy applications that I want to run, 
all of them on a virtualized environment. I want to leverage virtualization 
technology to improve my datacenter consolidation, ease of maintenance, and 
ease of management, with an increase in capacity.


If you don’t want cloud and don’t need cloud then don’t look at CloudStack or 
OpenStack. There’s no need to be buzzword compliant. Just use ESXi (with or 
without vCenter), or use KVM or Xen directly. After you’ve virtualized your 
infrastructure either one of two things will happen:

1) You’ll realize that you really did want cloud, but now you’ll understand why 
and you won’t just be doing it for buzzword’s sake

or

2) You’ll be perfectly happy, because you were actually right that all you 
wanted was to virtualize a few servers. You’re done, and you didn’t make 
yourself miserable by implementing something you really didn’t want or need.

Not everybody needs a cloud platform. Certainly not everybody needs to build 
their own cloud platform internal to their company. If your IT needs are big 
enough and rapidly changing enough to actually need cloud, then you wouldn’t be 
posting to an OpenStack mailing list that “I don’t want cloud”, unless of course 
you don’t actually know your IT needs well enough. And if you don’t know your 
IT needs well enough to understand on a technical basis why you need OpenStack 
(or CloudStack or vCloud, etc.), then you need to go back and figure out your 
own IT needs before proceeding further.


Re: [openstack-dev] [neutron] blueprint ovs-firewall-driver: OVS implementation of security groups

2014-06-03 Thread CARVER, PAUL

Amir Sadoughi wrote:

Specifically, OVS lacks connection tracking, so it won't have a RELATED feature 
or stateful rules for non-TCP flows. (OVS connection tracking is currently 
under development, to be released by 2015.)

This definitely needs a big, obvious warning label. A stateless firewall 
hasn't been acceptable in serious security environments for at least a decade. 
Real firewalls do things like TCP sequence number validation, to ensure that 
someone isn't hijacking an existing connection, and TCP flag validation, to 
make sure that someone isn't fuzzing by sending invalid combinations of flags 
in order to uncover bugs in servers behind the firewall.
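To make the point about flag validation concrete, here is a toy sketch (nothing like how a real firewall is implemented) of rejecting TCP flag combinations that never occur in legitimate traffic:

```python
def suspicious_tcp_flags(flags: frozenset) -> bool:
    """Return True for flag combinations a stateful firewall would drop.
    A real firewall checks far more than this, including sequence numbers
    against per-connection state; this only covers a few classic probes."""
    illegal = [
        frozenset(),                       # "null" scan: no flags at all
        frozenset({"SYN", "FIN"}),         # SYN+FIN never occurs legitimately
        frozenset({"FIN", "PSH", "URG"}),  # "Xmas tree" scan
    ]
    # SYN+RST together is also contradictory regardless of other flags.
    return flags in illegal or ("SYN" in flags and "RST" in flags)

print(suspicious_tcp_flags(frozenset({"SYN"})))          # → False (normal open)
print(suspicious_tcp_flags(frozenset({"SYN", "FIN"})))   # → True  (fuzzing)
```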


- debugging OVS is new to users compared to debugging old iptables

This one is very important in my opinion. There absolutely needs to be a 
section in the documentation
on displaying and interpreting the rules generated by Neutron. I'm pretty sure 
that if you tell anyone
with Linux admin experience that Neutron security groups are iptables based, 
they should be able to
figure their way around iptables -L or iptables -S without much help.

If they haven't touched iptables in a while, five minutes reading man 
iptables should be enough
for them to figure out the important options and they can readily see the 
relationship between
what they put in a security group and what shows up in the iptables chain. I 
don't think there's
anywhere near that ease of use on how to list the OvS ruleset for a VM and see 
how it corresponds
to the Neutron security group.


Finally, logging of packets (including both dropped and permitted connections) 
is mandatory in many
environments. Does OvS have the ability to do the necessary logging? Although 
Neutron
security groups don't currently enable logging, the capabilities are present in 
the underlying
iptables and can be enabled with some work. If OvS doesn't support logging of 
connections then
this feature definitely needs to be clearly marked as not a firewall 
substitute so that admins
are clearly informed that they still need a real firewall for audit 
compliance and may only
consider OvS based Neutron security groups as an additional layer of protection 
behind the
real firewall.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][L3] VM Scheduling v/s Network as input any consideration ?

2014-05-30 Thread CARVER, PAUL
Mathieu Rohon wrote:

I'm also very interested in scheduling VMs with network requirements. This 
seems to be in the scope of the NFV workgroup [1]. For instance, I think that 
scheduling should take into account bandwidth/QoS requirements for a VM, or a 
specific NIC.

This falls in my area of interest as well. We're working on making network 
quality of service guarantees through a combination of DSCP marking, a 
reservation database, and separate hardware queues in physical network 
switches, to ensure that the reservations granted don't exceed the wire speed 
of the switches. Right now, if the total of requested reservations would 
exceed a switch's wire speed, the only option is to deny reservations on the 
basis of "first come, first served" and "last come, doesn't get served": in 
other words, simply issuing a failure response at reservation time to any 
tenants who attempt to make reservations after a particular switch port is 
maxed out (from a reservation perspective, not necessarily maxed out from an 
actual utilization perspective at any given moment).
However, with the element of chance in VM-to-compute-node scheduling, it's 
possible that a tenant could get a deny response from the reservation server 
because their VM landed on a particularly reservation-heavy rack. If their VM 
had landed on a compute node in a different rack, there might well be plenty 
of excess bandwidth on that rack's uplink. But our current implementation has 
no way to tell Nova or the tenant that a reservation that was denied could 
have been granted if the VM were relocated to a rack with less network load.
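The admission-control logic described above (grant reservations until the port's wire speed is exhausted, then deny) can be sketched roughly as follows. The class and method names are illustrative, not taken from any real reservation server:

```python
class PortReservations:
    """Toy first-come, first-served bandwidth admission control for one switch port."""

    def __init__(self, wire_speed_mbps):
        self.capacity = wire_speed_mbps
        self.granted = {}  # tenant -> reserved Mbps

    def reserve(self, tenant, mbps):
        """Grant the reservation only if the total granted stays within wire speed."""
        if sum(self.granted.values()) + mbps > self.capacity:
            return False  # "last come, doesn't get served"
        self.granted[tenant] = self.granted.get(tenant, 0) + mbps
        return True

port = PortReservations(wire_speed_mbps=10_000)  # 10G uplink
print(port.reserve("tenant-a", 6_000))  # True
print(port.reserve("tenant-b", 3_000))  # True
print(port.reserve("tenant-c", 2_000))  # False: would exceed wire speed
```

Note the denial is based purely on the sum of grants, not on measured utilization, which is exactly the limitation described above.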



Re: [openstack-dev] [neutron][group-based-policy] Should we revisit the priority of group-based policy?

2014-05-23 Thread CARVER, PAUL
Mohammad Banikazemi wrote:


in Atlanta the support was overwhelmingly positive in my opinion. I just 
wanted to make sure this does not get lost in our discussions.


Absolutely. I hadn't been following the group policy discussions prior to the 
summit but I was very impressed with what I saw and heard.

to in particular discuss the possibility of making the code less tightly 
coupled with Neutron core.

+1 to making it less tightly coupled (although I haven't been inside the code 
enough to have an opinion on how tightly coupled it is now).

Let's keep in mind OSI-like layers and well defined interfaces between them. 
Coming from a hardware networking background I find it very convenient to think 
in terms of ports, networks, subnets and routers. Those concepts should 
continue to be basic building blocks of software defined networks. The layer 4+ 
stuff should be added on top with clean interfaces that don't entangle 
functionality up and down the stack.

Strict OSI layer compliance has never been a great success, but the general 
concept has been very useful for a long time. All the most painful protocols 
for a network person to deal with are the ones, like SIP, where clean 
separation of layers was indiscriminately violated.


Re: [openstack-dev] Manual VM migration

2014-05-21 Thread CARVER, PAUL
Are you sure steps 1 and 2 aren’t in the wrong order? Seems like if you’re 
going to halt the source VM you should take your snapshot after halting. (Of 
course if you don’t intend to halt the VM you can just do your best to quiesce 
your most active writers before taking the snapshot and hope the disk is 
sufficiently consistent.)


1) Take a snapshot of the VM from the source Private Cloud
2) Halts the source VM (optional, but good for state consistency)
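A toy model of why the ordering matters: a snapshot taken before the halt (and the flush a clean halt implies) can miss buffered, in-flight writes. Purely illustrative; real hypervisors and filesystems are far more involved:

```python
class ToyVM:
    """Minimal VM model: writes land in a buffer, a clean halt flushes them to disk."""

    def __init__(self):
        self.disk = []
        self.buffer = []

    def write(self, block):
        self.buffer.append(block)      # in-flight, not yet on disk

    def halt(self):
        self.disk.extend(self.buffer)  # clean shutdown flushes pending writes
        self.buffer = []

    def snapshot(self):
        return list(self.disk)         # a snapshot sees only what is on disk

vm = ToyVM()
vm.write("journal-entry-1")

early = vm.snapshot()   # snapshot before halt: misses the buffered write
vm.halt()
late = vm.snapshot()    # snapshot after halt: consistent

print(early)  # []
print(late)   # ['journal-entry-1']
```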



[openstack-dev] [Neutron] Port mirroring

2014-05-16 Thread CARVER, PAUL

Did anything interesting come out of the port mirroring discussion in the 
Neutron pod this morning?

Through a failure to hear the alert from my phone I completely forgot to show 
up.




[openstack-dev] Gerrit rst

2014-05-16 Thread CARVER, PAUL

When looking at a change in Gerrit that includes an rst file, is there any easy 
way to view the rendered view rather than merely the markup view? The side by 
side diff is great, but I'd really like a clickable link to the rendered view, 
especially for ones that include nwdiag or blockdiag syntax.



Re: [openstack-dev] [Openstack-dev][Neutron] Port Mirroring Extension in Neutron

2014-05-15 Thread CARVER, PAUL
Port mirroring is definitely a topic that I hear coming from network 
operations. The netops folks are accustomed to having sniffers all over the 
place and being able to span switch ports as a first step in network 
troubleshooting. "How do I span a port?" is one of the first questions a 
netops person wants answered when we talk about cloud.





Re: [openstack-dev] B203 table 6 for Neutron//Re: SDN NBI Core APIs consumed by OpenStack: Wednesday May 14th at 10:30-11am in the developer lounge at 3rd floor

2014-05-14 Thread CARVER, PAUL
Tina,

That was a good conversation. Would you be available for some additional 
followup on the L3 VPN topic at 4:00 today? I have a coworker who wasn't 
available for the discussion earlier today.






 Original message 
From: Tina TSOU tina.tsou.zout...@huawei.com
Date:
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Cc: paul.car...@att.com,mmccl...@yahoo-inc.com,Louis.Fourie 
louis.fou...@huawei.com
Subject: Re: [openstack-dev] B203 table 6 for Neutron//Re: SDN NBI Core APIs 
consumed by OpenStack: Wednesday May 14th at 10:30-11am in the developer lounge 
at 3rd floor


Dear all,

Below is the main conclusion from this meeting.

We will work on the following SDN NBI Core APIs, in priority order per 
Neutron's interest: 2, 7, 10, 9, 11.

1.2 APIs for connection between OpenStack Neutron and controller

OpenStack is widely used and deployed in cloud scenarios. The OpenStack-based 
data center is becoming mainstream.

There should be APIs for connection between SDN controller and OpenStack 
Neutron.



1.7 APIs for Virtual-Tenant-Network (VTN)

VTN allows users and developers to design and deploy virtual networks without 
needing to know the physical network. This is very useful in data centers.

There should be APIs for virtual tenant network.


1.10 APIs for QoS

QoS usually matters to end-user applications. For example, the UC-SDN use case 
needs the network to guarantee per-flow QoS to improve the user's QoE.

There should be APIs for QoS.


1.9 APIs for VPN

VPNs are also widely used in enterprise networks, interconnection between data 
centers, and mobile environments.

The management and operation of VPNs are necessary. There should be APIs for 
VPN.

The VPN may include the following types:

L2 VPN

L3 VPN


1.11 APIs for network stats/state

Network stats/state are needed by applications so that they can react with the 
corresponding policies.

There should be APIs for network stats/state.


Thank you,
Tina

On May 14, 2014, at 10:00 AM, Tina TSOU tina.tsou.zout...@huawei.com wrote:

Dear all,

Place is changed to B203 table 6 for Networking (Neutron), Design Summit Pod 
area.


Thank you,
Tina

On May 13, 2014, at 10:00 PM, Tina TSOU tina.tsou.zout...@huawei.com wrote:

Dear Stackers,

We are setting up a meeting to SDN NBI Core APIs consumed by OpenStack. 
Attached is the material for your reading pleasure.

The meeting is planned for:
Wednesday May 14th at 10:30-11am, in the developer lounge on the 3rd floor.
Look forward to meeting many of you then.


Thank you,
Tina

NBI Core APIs.docx


Re: [openstack-dev] [Neutron][NFV] NFV BoF at design summit

2014-05-14 Thread CARVER, PAUL
I'm planning to go to the neutron policy session at 1:30 but I'd like to find a 
chance to meet you and say hi. I'll be at the summit through Friday.





 Original message 
From: Luke Gorrie l...@tail-f.com
Date:
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Cc: Ian Wells (iawells) iawe...@cisco.com
Subject: Re: [openstack-dev] [Neutron][NFV] NFV BoF at design summit


Can't wait :-).

On 14 May 2014 19:06, Chris Wright chr...@redhat.com wrote:
 Thursday at 1:30 PM in the Neutron Pod we'll do
 an NFV BoF.  If you are at design summit and
 interested in Neutron + NFV please come join us.

 thanks,
 -chris



Re: [openstack-dev] [Openstack][nova][Neutron] Launch VM with multiple Ethernet interfaces with I.P. of single subnet.

2014-04-17 Thread CARVER, PAUL

Akihiro Motoki wrote:

To cope with such cases, allowed-address-pairs extension was implemented.
http://docs.openstack.org/api/openstack-network/2.0/content/allowed_address_pair_ext_ops.html


Question on this in particular: Is a tenant permitted to do this? If so, what 
exactly is the iptables rule accomplishing? If the intent was to prevent the 
tenant from spoofing someone else's IP then forcing the tenant to take an extra 
step of making an API call prior to attempting to spoof doesn't really stop 
them.

Question in general: Is there an easy way to see the whole API broken out by 
privilege level? I'd like to have a clear idea of all the functionality that 
requires a cloud operator/admin to perform vs the functionality that a tenant 
can perform. Obviously Horizon looks different for an admin than it does for a 
tenant, but I'm not as clear on how to identify differences in the API.



Re: [openstack-dev] [Openstack][nova][Neutron] Launch VM with multiple Ethernet interfaces with I.P. of single subnet.

2014-04-17 Thread CARVER, PAUL
Aaron Rosen wrote:

Sorry not really. It's still not clear to me why multiple nics would be 
required on the same L2 domain.

I’m a fan of this old paper, for nostalgic reasons: 
http://static.usenix.org/legacy/publications/library/proceedings/neta99/full_papers/limoncelli/limoncelli.pdf
But a search for "transparent firewall" or "bridging firewall" turns up tons of hits.

Whether any of them are valid use cases for OpenStack is something that we 
could debate, but the general concept of putting two firewall interfaces into 
the same L2 domain and using it to control traffic flow between different hosts 
on the same L2 domain has at least five years of history behind it.



Re: [openstack-dev] [Neutron][Heat] The Neutron API and orchestration

2014-04-08 Thread CARVER, PAUL
Zane Bitter wrote:

(1) Create a network
Instinctively, I want a Network to be something like a virtual VRF 
(VVRF?): a separate namespace with it's own route table, within which 
subnet prefixes are not overlapping, but which is completely independent 
of other Networks that may contain overlapping subnets. As far as I can 
tell, this basically seems to be the case. The difference, of course, is 
that instead of having to configure a VRF on every switch/router and 
make sure they're all in sync and connected up in the right ways, I just 
define it in one place globally and Neutron does the rest. I call this 
#winning. Nice work, Neutron.

This is your main misunderstanding, and the source of most, but not all,
of your other issues. A network in Neutron is NOT equivalent
to a VRF. A network is really just a single LAN segment (i.e., a single
broadcast domain). It allows the use of multiple subnets on the same
broadcast domain, which is generally not a great idea, but it doesn't
violate any standards and is sometimes useful.

There is no construct in Neutron to represent an entire
network in the sense that most networking people use the word
(i.e., multiple broadcast domains interconnected via routers.)

A router in Neutron also doesn't really represent the same thing
that most networking people mean by the word, at least not yet.
A router in Neutron is basically a NAT box like a home Linksys/
Netgear/etc, not a Cisco ASR or Juniper M or T series. Most notably
it doesn't run routing protocols. It doesn't handle route redistribution,
it doesn't handle queuing and QoS, ACL support is only preliminary, etc.

So your expectation of being able to orchestrate a real network in
the sense of a collection of LAN segments and routers and global
routing tables and topology isn't native to Neutron. So the question
is whether that overarching orchestration should be in Heat using
only the primitives that Neutron currently provides or whether
Neutron should be extended to include entire networks in the
sense that you and I would tend to define the word.
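The distinction can be sketched as a toy data model (the class names are illustrative, not Neutron's actual objects): a Network is one broadcast domain holding subnets, and a Router has interfaces on specific networks, while no object represents the routed topology as a whole.

```python
from dataclasses import dataclass, field

@dataclass
class Subnet:
    cidr: str

@dataclass
class Network:
    """One L2 broadcast domain; may carry several subnets."""
    name: str
    subnets: list = field(default_factory=list)

@dataclass
class Router:
    """NAT-box-style router: interfaces on networks, no routing protocols."""
    name: str
    interfaces: list = field(default_factory=list)  # Networks it attaches to

front = Network("front", [Subnet("10.0.1.0/24")])
back = Network("back", [Subnet("10.0.2.0/24"), Subnet("10.0.3.0/24")])  # legal, if unusual
r1 = Router("r1", [front, back])

# There is no "whole network" object tying front, back, and r1 together;
# the topology exists only implicitly in the router's interface list.
print([n.name for n in r1.interfaces])  # ['front', 'back']
```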




Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-28 Thread CARVER, PAUL
Jay Pipes wrote: 
I'm proposing getting rid of the host aggregate hack (or maybe evolving
it?) as well as the availability zone concept and replacing them with a
more flexible generic container object that may be hierarchical in
nature.

Is the thing you're proposing to replace them with something that already
exists or a brand new thing you're proposing should be created?

We need some sort of construct that allows the tenant to be confident that
they aren't going to lose multiple VMs simultaneously due to a failure of
underlying hardware. The semantics need to be easily comprehensible to the
tenant; otherwise you'll get people thinking they're protected because they
built a redundant pair of VMs, only for sheer bad luck to take out both at
the same time.

We're using availability zone for that currently and it seems to serve the
purpose in a way that's easy to explain to a tenant.
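The property being asked for, that no two VMs of a redundant pair share a failure domain, can be sketched as a simple placement check. The function and names below are illustrative, not Nova's actual scheduler:

```python
def place_redundant_pair(hosts_by_zone, placements, group):
    """Place each VM of `group` in a distinct availability zone.

    hosts_by_zone: {zone: [host, ...]}; placements is filled with vm -> (zone, host).
    Returns True if every VM landed in its own zone, else False.
    """
    used_zones = set()
    for vm in group:
        # Pick the first zone not already used by this group that still has hosts.
        zone = next((z for z in hosts_by_zone
                     if z not in used_zones and hosts_by_zone[z]), None)
        if zone is None:
            return False  # not enough independent failure domains
        placements[vm] = (zone, hosts_by_zone[zone][0])
        used_zones.add(zone)
    return True

zones = {"az1": ["compute-01"], "az2": ["compute-07"]}
placements = {}
ok = place_redundant_pair(zones, placements, ["vm-a", "vm-b"])
print(ok)  # True: the pair spans two failure domains
```

The point is that the guarantee is only as good as the zones' actual hardware independence, which is exactly what the tenant has to be able to trust.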



[Openstack] ESXi in production with Neutron

2014-03-14 Thread CARVER, PAUL
Is anyone using ESXi in production with Neutron networking?

We currently run a production OpenStack cloud with KVM only using OvS+GRE 
networking. We're in the process of evaluating a number of commercial SDN 
vendors, but we're being asked to support ESXi on a very short timeframe, 
before we've had sufficient time to complete an objective evaluation of all 
the available vendors who would like to sell us an SDN solution.

If anyone is running an OpenStack cloud that includes ESXi hypervisors but does 
not rely on a commercial SDN vendor to make it work I'd love to hear about it. 
I'm not eager to bypass the vendor evaluation process and randomly pick a 
vendor without an objective and fair evaluation, but I'm finding it hard to 
locate any documentation that indicates that ESXi is compatible with OpenStack 
(and specifically Neutron networking) unless you also select one of the 
commercial SDN vendors to bridge the gaps.

If we could add ESXi nodes to our current OpenStack zones, even if there were 
some known functional shortcomings, it would give us some breathing room on 
evaluating SDN vendors in a diligent manner.

--
Paul Carver
VO: 732-545-7377
Cell: 908-803-1656
E: pcar...@att.com
Q Instant Message: qto://talk/pc2929



Re: [Openstack] ESXi in production with Neutron

2014-03-14 Thread CARVER, PAUL
Dan Wendlandt wrote:

In my experience working with customers, people typically either dip their toe 
in to start by just running ESXi/vSphere with
nova-network, or they have decided on NSX from the get-go and start with 
Neutron.   If you're just looking for nova-network,
VOVA makes it really easy to do a basic nova-network setup: 
https://communities.vmware.com/docs/DOC-24626

We’re running Quantum (in Grizzly) with OvS+GRE overlay networking with KVM 
only. NSX and VMware/Nicira are among the SDN vendors who claim to be able to 
integrate ESXi with KVM in a multi-hypervisor OpenStack cloud, but they’re not 
the only one. There are a number of other vendors who claim they can also 
integrate ESXi with KVM in a common virtual network.

We’re not ready to just decide to go with VMware/Nicira without first 
evaluating all the options.

However, we’re getting some pressure to support ESXi in a timeframe that 
doesn’t allow for completing the SDN evaluation. I’m trying to figure out if we 
can integrate ESXi into our KVM based OvS+GRE/Neutron (post-upgrade from 
Grizzly) cloud in the interim, or if we need to push back and say that we 
simply can’t support ESXi in the requested timeframe.


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-12 Thread CARVER, PAUL
I have personally witnessed someone (honestly, not me) select "Terminate 
Instance" when they meant "Reboot Instance", and that mistake is way too easy 
to make. I'm not sure if it was a brain mistake or a mere slip of the mouse, 
but it's enough to make people really nervous in a production environment. If 
there's one thing you can count on about human beings, it's that they'll make 
mistakes sooner or later. Any system that assumes infallible human beings as 
a design criterion is making an invalid assumption.

-- 
Paul Carver
VO: 732-545-7377
Cell: 908-803-1656
E: pcar...@att.com
Q Instant Message


-Original Message-
From: Tim Bell [mailto:tim.b...@cern.ch] 
Sent: Tuesday, March 11, 2014 15:43
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft 
deletion (step by step)


Typical cases are user error where someone accidentally deletes an item from a 
tenant. The image guys have a good structure where images become unavailable 
and are recoverable for a certain period of time. A regular periodic task 
cleans up deleted items after a configurable number of seconds to avoid 
constant database growth.

My preference would be to follow this model universally (an archive table is a 
nice way to do it without disturbing production).

Tim
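The archive model Tim describes, where deleted rows stay recoverable for a configurable window and a periodic task purges the rest, could be sketched like this. Everything here is illustrative, not any project's actual schema:

```python
import time

class ArchivingStore:
    """Soft delete into an archive table; purge after `retention_seconds`."""

    def __init__(self, retention_seconds):
        self.retention = retention_seconds
        self.rows = {}      # live table: id -> row
        self.archive = {}   # archive table: id -> (row, deleted_at)

    def delete(self, row_id):
        # Move the row aside instead of destroying it.
        self.archive[row_id] = (self.rows.pop(row_id), time.time())

    def undelete(self, row_id):
        # Recover a row deleted by mistake, within the retention window.
        row, _ = self.archive.pop(row_id)
        self.rows[row_id] = row

    def purge(self, now=None):
        """Periodic task: drop archived rows older than the retention window."""
        now = now if now is not None else time.time()
        for row_id, (_, deleted_at) in list(self.archive.items()):
            if now - deleted_at > self.retention:
                del self.archive[row_id]

store = ArchivingStore(retention_seconds=3600)
store.rows["img-1"] = {"name": "golden-image"}
store.delete("img-1")          # recoverable, not gone
store.undelete("img-1")        # user error undone
print("img-1" in store.rows)   # True
```

The archive table keeps the live table free of soft-deleted rows, which is the "without disturbing production" part of the suggestion.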


 On Tue, Mar 11, 2014, Mike Wilson geekinu...@gmail.com wrote:
  Undeleting things is an important use case in my opinion. We do this
  in our environment on a regular basis. In that light I'm not sure that
  it would be appropriate just to log the deletion and get rid of the
  row. I would like to see it go to an archival table where it is easily 
  restored.
 
 I'm curious, what are you undeleting and why?
 
 JE
 
 


Re: [openstack-dev] Proposal to move from Freenode to OFTC

2014-03-06 Thread CARVER, PAUL
James E. Blair [mailto:jebl...@openstack.org] wrote:

significant amount of time chasing bots.  It's clear that Freenode is
better able to deal with attacks than OFTC would be.  However, OFTC
doesn't have to deal with them because they aren't happening; and that's
worth considering.

Does anyone have any idea who is being targeted by the attacks?
I assume they're hitting Freenode as a whole, but presumably the motivation
is one or more channels as opposed to just not liking Freenode in principle.

Honestly I tried IRC in the mid-nineties and didn't see the point (I spent all
my free time reading Usenet (and even paid for Agent at one point after
switching from nn on SunOS to Free Agent on Windows)) and never found
any reason to go back to IRC until finding out that OpenStack's world
revolves around Freenode. So I was only distantly aware of the battlefield
of DDoSers trying to cause netsplits in order to get ops on contentious
channels.

Is there any chance that OpenStack is the target of the DDoSers? Or do
you think there's some other target on Freenode and we're just
collateral damage?




Re: [openstack-dev] [Solum] Question about Zuul's role in Solum

2014-02-13 Thread CARVER, PAUL
Julien Vey wrote:

About Gerrit, I think it is also a little too much. Many users have their
own reviewing system: pull requests with GitHub, Bitbucket, or Stash,
their own instance of Gerrit, or even a custom git workflow.
Gerrit would be a great feature for future versions of Solum, but only
as an optional one; we should not force people into it.

I'm just an observer since I haven't managed to negotiate the CLA hurdle with 
my employer yet, but Gerrit seems to me to work fantastically well.

If there are better options than Gerrit that people are using I'd be interested 
in hearing about them. I'm always interested in learning who has the best in 
class tool for any particular task. However I think that having multiple tools 
for the same job within OpenStack is going to be a bad idea that results in 
confusion and difficulty in cooperation.

Now, if the intention is for Solum to NOT be an OpenStack project that's fine. 
OpenStack can use and be used by lots of projects that aren't part of it. But 
if someone learns the tools and processes for one OpenStack project they ought 
to be able to jump right into any other OpenStack project without having to 
worry about what code review tool, or other tools or processes are different 
from one to the next.


Re: [openstack-dev] [neutron][ml2] Maintaining support for the Tail-f NCS mech driver in Icehouse

2014-02-09 Thread CARVER, PAUL
Kyle Mestery wrote: 

So, in general I don't think this will fly because it's my understanding the
OpenStack servers only test fully open source code. Allowing a third party
vendor system to run on the OpenStack servers as part of any functional
testing would open an entirely new can of worms here. I would suggest
asking this question on #openstack-infra as well for clarity since I don't see
a response on the mailing list yet.

How does the current testing work with any of the hardware drivers?
I just read Jay Pipe's excellent blog post [1] on the general setup and
function of the CI system, but it only explained the software parts.

I could not extrapolate from the article anything about how it works in the 
context of Neutron drivers that are supposed to configure physical networking 
hardware, or even software components such as Nicira's or PLUMgrid's gateways.


[1] http://www.joinfu.com/2014/01/understanding-the-openstack-ci-system/




Re: [openstack-dev] [nova]Why not allow to create a vm directly with two VIF in the same network

2014-01-30 Thread CARVER, PAUL
Vishvananda Ishaya wrote:

In testing I have been unable to saturate a 10g link using a single VM. Even 
with multiple streams, the best I have been able to do (using virtio and 
vhost_net) is about 7.8g.

Can you share details about your hardware and vSwitch config? (Possibly off 
list, if that isn't a valid openstack-dev topic.)

I haven't been able to spend any time on serious performance testing, but just 
doing preliminary testing on a HP BL460cG8 and Virtual Connect I haven't been 
able to push more than about 1Gbps using a pretty vanilla Havana install with 
OvS and VLANs (no GRE).



Re: [openstack-dev] [nova]Why not allow to create a vm directly with two VIF in the same network

2014-01-27 Thread CARVER, PAUL
Lingxian Kong wrote:

Actually, in the scenario of NFV, all the rules or behaviors of the physical 
world will apply in the virtual world, right?

IMHO, regardless of the scenario, we should at least guarantee consistency 
between creating VMs with NICs and attaching NICs.

I'll need to think about that a bit before I decide whether I agree. My gut 
response is to disagree with it as a blanket statement while allowing that it 
may apply in specific scenarios.
The point about PCI passthrough in another post was a good one. If the VM is 
managing physical NICs then LACP at the VM level would make sense. But if the 
VM is using virtualized NICs there's at least some possibility that the 
underlying connectivity from the vNIC to the physical network is going over an 
LACP bundle of multiple NICs handled at the hypervisor level. At least that's 
the way we're doing it.
Running LACP over multiple vNICs just seems wrong to me. Increasing 
availability doesn't simply mean adding two (or more) of everything. Sometimes 
adding more things reduces availability. There needs to be at least one 
specific failure scenario where thing 2 can be expected to continue working 
when thing 1 fails. If all the failure modes are correlated (i.e. whatever 
caused thing 1 to fail almost certainly would also cause thing 2 to fail 
simultaneously) then having one thing would be better than two.
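The point about correlated failures can be made concrete with a quick back-of-the-envelope calculation (the numbers are invented for illustration): a second vNIC only helps to the extent that its failures are independent of the first's.

```python
def pair_unavailability(p_fail, correlation):
    """Probability that both 'redundant' things are down at once.

    correlation = 0: independent (second fails with probability p_fail anyway);
    correlation = 1: fully correlated (if one fails, the other fails too).
    """
    p_second_given_first = correlation + (1 - correlation) * p_fail
    return p_fail * p_second_given_first

p = 0.01  # assume a 1% chance a single vNIC/path is down at any moment
print(pair_unavailability(p, 0.0))  # tiny: an independent pair genuinely helps
print(pair_unavailability(p, 1.0))  # equals p: a fully correlated pair is no better than one
```

With full correlation the pair buys nothing while adding configuration complexity, which is the argument above for sometimes preferring one thing over two.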



Re: [openstack-dev] [oslo] log message translations

2014-01-27 Thread CARVER, PAUL
Joshua Harlow wrote:
From what I know most all (correct me if I am wrong) open source projects
don't translate log messages; so it seems odd to be the special snowflake
project/s.

Do people find this type of translation useful?

It'd be nice to know how many people really do so the benefit/drawbacks of
doing it can be evaluated by real usage data.

This is just a wild idea off the top of my head, but what about creating a log
translation service completely independent of running systems. Basically
I'm thinking of a web based UI hosted on openstack.org where you can
upload a logfile or copy/paste log lines and receive it back in the language
of your choice.

When googling for an error message it would definitely be better for
everyone to be using the same language because otherwise you'll
only find forum posts and so forth in your own language and probably
miss a solution posted in another language.

But I can certainly see that some people might actually be able to figure
out problems for themselves if they could see the error in their native
language.

So my idea is to log messages in English but create a standard way to
get a translated version.

As a side effect, the usage of the web based translator server would
give a way to answer your question about how many people use it.
If it doesn't get any usage then people can stop investing the time in
creating the translations.



Re: [openstack-dev] [oslo] log message translations

2014-01-27 Thread CARVER, PAUL
Jay Pipes wrote:

Have you ever tried using Google Translate for anything more than very
simple phrases?

The results can be... well, interesting ;) And given the amount of
technical terms used in these messages, I doubt GT or any automated
translating service would provide a whole lot of value...

Exactly what I wasn't suggesting, and why I wasn't suggesting it. I meant an
OpenStack-specific translation service taking advantage of the work
that the translators have already done and any work they do in the future.

I haven't looked at any of the current translation code in any OpenStack
project, but I presume there's basically a one to one mapping of English
messages to each other available language (maybe with rearrangement
of parameters to account for differences in grammar?)

I'd be surprised and impressed if the translators are applying some sort
of context sensitivity such that a particular English string could end up
getting translated to multiple different strings depending on something
that isn't captured in the English log message.

So basically instead of doing the search and replace of the static text
of each message before writing to the logfile, write the message to
the log in English and then have a separate process (I proposed web
based, but it could be as simple as a CLI script) to search and replace
the English with the desired target language after the fact.

If there's still a concern about ambiguity where you couldn't identify the
correct translation based only on knowing the original English static
text, then maybe it would be worth assigning unique ID numbers
to every translatable message so that it can be mapped uniquely
to the corresponding message in the target language.
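The unique-ID scheme suggested above amounts to logging a stable message ID plus parameters alongside the English text, then translating after the fact from a catalog. A minimal sketch, with the catalog contents and message ID invented for illustration:

```python
import json

# Catalog: message ID -> per-language templates. Parameters survive translation.
CATALOG = {
    "NET-0042": {
        "en": "Failed to attach port {port} to network {net}",
        "de": "Port {port} konnte nicht an Netzwerk {net} angebunden werden",
    },
}

def log_line(msg_id, **params):
    """Always log in English, with a machine-readable trailer carrying ID and params."""
    text = CATALOG[msg_id]["en"].format(**params)
    return f"{text} [{msg_id} {json.dumps(params)}]"

def translate(line, lang):
    """Post-hoc translation: recover ID and params from the trailer, re-render in `lang`."""
    _text, _, trailer = line.rpartition(" [")
    msg_id, _, payload = trailer.rstrip("]").partition(" ")
    return CATALOG[msg_id][lang].format(**json.loads(payload))

line = log_line("NET-0042", port="eth0", net="demo")
print(line)
print(translate(line, "de"))
```

Because the logged line stays English, searching the web for the error still works; the translation step is entirely optional and can run anywhere the catalog is available.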




Re: [openstack-dev] [nova]Why not allow to create a vm directly with two VIF in the same network

2014-01-24 Thread CARVER, PAUL
I agree that I'd like to see a set of use cases for this. This is the second 
time in as many days that I've heard about a desire to have such a thing but I 
still don't think I understand any use cases adequately.

In the physical world it makes perfect sense: LACP, MLT,
EtherChannel/port channel, etc. In the virtual world I need to see a detailed
description of one or more use cases.

Shihanzhang, why don't you start up an Etherpad or something and start putting 
together a list of one or more practical use cases in which the same VM would 
benefit from multiple virtual connections to the same network. If it really 
makes sense we ought to be able to clearly describe it.

--
Paul Carver
VO: 732-545-7377
Cell: 908-803-1656
E: pcar...@att.com
Q Instant Message: qto://talk/pc2929

From: Day, Phil [mailto:philip@hp.com]
Sent: Friday, January 24, 2014 09:11
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova]Why not allow to create a vm directly with 
two VIF in the same network

I agree it's oddly inconsistent (you'll get used to that over time ;-) - but to
me it feels more like the validation is missing on the attach than that the
create should allow two VIFs on the same network.   Since these are both
virtualised (i.e. share the same bandwidth, don't provide any additional
resilience, etc.) I'm curious about why you'd want two VIFs in this
configuration ?

From: shihanzhang [mailto:ayshihanzh...@126.com]
Sent: 24 January 2014 03:22
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [nova]Why not allow to create a vm directly with two 
VIF in the same network

I am a beginner with nova, and there is a problem that has confused me. In the
latest version it is not allowed to create a VM directly with two VIFs in the
same network, but it is allowed to add a VIF whose network is the same as an
existing VIF's network. So there is a use case for a VM with two VIFs in the
same network; why not allow creating the VM directly with two VIFs in the same
network?



Re: [openstack-dev] [neutron] Neutron should disallow /32 CIDR

2014-01-23 Thread CARVER, PAUL
Paul Ward:

Thank you to all who have participated in this thread.  I've just proposed a 
fix in gerrit.  For those involved thus far, if you could review I would be 
greatly appreciative!

https://review.openstack.org/#/c/68742/1

I wouldn't go so far as to say this verification SHOULDN'T be added, but 
neither would I say it should. From a general use case perspective I don't 
think IPv4 subnets smaller than /29 make sense. A /32 is a commonly used subnet 
length for some use cases (e.g. router loopback interface) but may not have an 
applicable use in a cloud network. I have never seen a /31 network used 
anywhere. Point to point links (e.g. T1/Frame Relay/etc) are often /30 but I've 
never seen a /30 subnet for anything other than connecting two routers.

However, does it really benefit the user to specifically block them from 
entering /32 or block them from entering /30, /31, and /32?

It might not be an equal amount of code, but I think a much better way to help
the user would be to provide them with a subnet calculator directly in Horizon
to show them how many usable IPs are in the subnet they're defining. In this
case, displaying "Usable addresses: 0" right when they enter /32 would be
helpful, and they would figure out for themselves whether they really wanted
that mask or whether they meant something else.
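The arithmetic behind such a calculator is a one-liner with the stdlib ipaddress module: subtract the network and broadcast addresses from the block size. This sketch is mine, not the proposed Horizon code:

```python
# Usable host addresses in an IPv4 subnet: total addresses minus the
# network number (all-zeros host part) and broadcast (all-ones host part).
import ipaddress

def usable_hosts(cidr):
    net = ipaddress.ip_network(cidr, strict=False)
    return max(net.num_addresses - 2, 0)

# /31 and /32 have zero usable addresses; /30 has exactly two (the
# point-to-point case discussed above); /29 is the smallest with headroom.
counts = {p: usable_hosts("10.0.0.0/%d" % p) for p in (29, 30, 31, 32)}
# counts -> {29: 6, 30: 2, 31: 0, 32: 0}
```

Showing that count next to the CIDR field would let the user catch a /32 typo without the API having to forbid anything.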




Re: [openstack-dev] [neutron] Neutron should disallow /32 CIDR

2014-01-23 Thread CARVER, PAUL
Paul Ward wrote:

Given your statement about routers potentially using a /30 network, I think we 
should leave the restriction at /30
rather than /29.  I'm assuming your statement that some routers use /30 
subnets to connect to each other could
potentially apply to neutron-created routers.

Generally speaking I’ve only seen /30 on point to point links. For most 
purposes the network number (all zeros host part) and broadcast (all ones host 
part) are not valid host addresses so a /30 only has two usable addresses. This 
is perfect on a point to point link which only has two endpoints, but kind of 
silly on a broadcast domain.

However, what is the user perceived impact of your code? Will the error message 
propagate back to the user directly or do they simply get an unexplained 
failure while the explanatory error message gets dumped into a log file which 
is visible only to the provider, not the tenant?

Putting a friendly helper in Horizon will help novice users and provide a good 
example to anyone who is developing an alternate UI to invoke the Neutron API. 
I’m not sure what the benefit is of putting code in the backend to disallow 
valid but silly subnet masks. I include /30, /31, AND /32 in the category of 
“silly” subnet masks to use on a broadcast medium. All three are entirely 
legitimate subnet masks, it’s just that they’re not useful for end host 
networks.

My reasoning behind checking the number of IP addresses in the subnet rather 
than the actual CIDR prefix length
is that I want the code to be IP version agnostic.  If we're talking IPv6, 
then /30 isn't going to be relevant.  I'm not
overly familiar with IPv6, but is it safe to say it has the same restriction 
that there must be more than 2 IPs available as the highest IP is the 
broadcast?

No, it actually isn’t safe to say that at all. IPv6, in what I consider to be 
“broken by design” but others defend vehemently, mandates that no subnet mask 
longer (i.e. numerically larger) than /64 is allowed. Some people grudgingly 
acknowledge that a /128 mask is acceptable for router loopback addresses and 
fewer people accept /126 for point-to-point links, but pretty much everyone 
agrees that anything between /64 and /126 should never be used.

Some people state that using a mask longer than /64 will break fundamental 
parts of IPv6. I’m of the opinion that it was already broken and using masks 
longer than /64 merely reveals what’s broken rather than causing the breakage. 
Nevertheless, it’s too late to change those parts of IPv6 now. They’re set in 
stone and we’re stuck with the fact that all broadcast domains (i.e. layer 2 
segments, VLANs, end host subnets) must be /64 and nothing else. Masks shorter 
than /64 (i.e. numbers smaller than 64) are for route aggregation and belong in 
the world of routing protocols and IPAM databases.

One could probably make a convincing argument that as far as neutron 
subnet-create is concerned there’s no reason to even accept a subnet mask at 
all. Anything other than /64 is pretty much 99.999% guaranteed to be user error.
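For what it's worth, Paul's version-agnostic "count the addresses" check is easy to express; the sketch below is my own illustration, with an arbitrary minimum, and note the caveat above that for IPv6 the real constraint is "/64 and nothing else", which an address count alone does not capture:

```python
# Version-agnostic sanity check: reject any subnet with too few addresses,
# regardless of IP version. The minimum of 4 is an arbitrary illustrative
# threshold (network + broadcast + at least two hosts for IPv4).
import ipaddress

def subnet_is_sane(cidr, minimum_addresses=4):
    net = ipaddress.ip_network(cidr, strict=False)
    return net.num_addresses >= minimum_addresses

checks = [subnet_is_sane(c) for c in
          ("192.168.0.0/29",   # 8 addresses: fine
           "192.168.0.0/31",   # 2 addresses: rejected
           "2001:db8::/64",    # 2**64 addresses: fine
           "2001:db8::/127")]  # 2 addresses: rejected
# -> [True, False, True, False]
```

An address-count check passes a hypothetical IPv6 /80 (2**48 addresses), so enforcing the /64-only convention would still need an explicit per-version prefix rule.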



Re: [openstack-dev] [nova][neutron]About creating vms without ip address

2014-01-22 Thread CARVER, PAUL
Yuriy Taraday wrote:

Fuel needs to manage nodes directly via DHCP and PXE and you can't do that 
with Neutron since you can't make its dnsmasq service quiet.

Can you elaborate on what you mean by this? You can turn off Neutron’s dnsmasq 
on a per-network basis, correct? Do you mean something else by “make its 
dnsmasq service quiet”?



[openstack-dev] [Neutron] Selectively disabling certain built in iptables rules

2014-01-21 Thread CARVER, PAUL
Feel free to tell me this is a bad idea and scold me for even asking, but 
please help me figure out how to do it anyway. This is for a specific tenant in 
a specific lab that was built specifically for that one tenant to do some 
experimental work that requires VMs to route and other VMs to act as 
DHCP/PXEBoot servers.

I need to wrap a conditional around this line 
https://github.com/openstack/neutron/blob/master/neutron/agent/linux/iptables_firewall.py#L201
 and this line 
https://github.com/openstack/neutron/blob/master/neutron/agent/linux/iptables_firewall.py#L241
 for specific VM instances.

The criteria could be something like pattern matching on the instance name, or 
based on a specific flavor or image type. I don't much care what the criteria 
is as long as it's something the tenant can control. What I'm hoping someone can 
provide me with is an example line of code or two with which I can examine some 
property of the image that has been created from within the specific file 
referenced above in order to wrap if statements around those two lines of code 
so that I can prevent them from adding those specific iptables rules in the 
specific cases where my tenant needs to either route or respond to DHCP.
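Structurally, what I'm after is something like the sketch below. To be clear, the port field name, the pattern, and the helper functions are all placeholders of my own invention, not the actual iptables_firewall.py code, which would need to be adapted:

```python
# Hedged sketch of wrapping the two rule-adding lines in a conditional.
# 'instance_name' and the lab-router- prefix are assumed criteria for
# illustration; the real criterion just needs to be tenant-controllable.
import re

ALLOW_SPOOF_PATTERN = re.compile(r"^lab-router-")

def spoofing_rules_needed(port):
    """Return False for ports whose instances may route or serve DHCP."""
    name = port.get("instance_name", "")
    return not ALLOW_SPOOF_PATTERN.match(name)

def setup_port_filter(port, add_spoofing_rule, add_dhcp_rule):
    # Instead of adding the anti-spoof and DHCP-blocking rules
    # unconditionally, skip both for the exempted instances.
    if spoofing_rules_needed(port):
        add_spoofing_rule(port)
        add_dhcp_rule(port)
```

The two callables stand in for the existing rule-building code at the referenced lines; only the surrounding "if" is the change being asked for.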

Thanks

--
Paul Carver



Re: [openstack-dev] Proposal for dd disk i/o performance blueprint of cinder.

2014-01-16 Thread CARVER, PAUL

Alan Kavanagh wrote: 

I posted a query to Ironic which is related to this discussion. My thinking 
was I want to ensure the case you note here: (1) a tenant can not read 
another tenant's disk. The next (2) was where in Ironic you provision a 
baremetal server that has an onboard disk as part of the blade provisioned to 
a given tenant-A. Then when tenant-A finishes his baremetal blade lease and 
that blade comes back into the pool and tenant-B comes along, I was asking 
what open source tools guarantee data destruction so that no ghost images or 
file retrieval is possible?

That is an excellent point. I think the needs of Ironic may be different from 
Cinder. As a volume manager Cinder isn't actually putting the raw disk under 
the control of a tenant. If it can be assured (as is the case with NetApp 
and other storage vendors' hardware) that all zeros are returned on a 
read-before-first-write of a chunk of disk space, then that's sufficient to 
address the case of some curious ne'er-do-well allocating volumes purely for 
the purpose of reading them to see what's left on them.
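The read-before-first-write behaviour amounts to a simple bookkeeping scheme, sketched below in miniature (the class and block size are my own illustration, not any vendor's implementation):

```python
# Sketch of zero-on-read-before-first-write: the volume tracks which
# blocks the *current* tenant has written; reads of untouched blocks
# return zeros instead of whatever bits a previous tenant left behind.
class ThinVolume:
    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.written = {}  # block index -> data written by current tenant

    def write_block(self, index, data):
        self.written[index] = data

    def read_block(self, index):
        # An unwritten block reads as zeros, so stale data is never
        # exposed through the API -- though, as noted below, the old
        # bits may still be physically present on the disk.
        return self.written.get(index, b"\x00" * self.block_size)

vol = ThinVolume()
vol.write_block(0, b"A" * 4096)
```

The key limitation is in the comment: this hides the previous tenant's data from reads, but does not destroy it on the physical media.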

But with bare metal the whole physical disk is at the mercy of the tenant, so 
you're right that it must be ensured that none of the previous tenant's 
bits are left lying around to be snooped on.

But I still think an *option* of wipe=none may be desirable because a cautious 
client might well take it into their own hands to wipe the disk before 
releasing it (and perhaps encrypt as well). In which case always doing an 
additional wipe is going to be more disk I/O for no real benefit.



Re: [openstack-dev] Proposal for dd disk i/o performance blueprint of cinder.

2014-01-16 Thread CARVER, PAUL
Clint Byrum wrote:

Is that really a path worth going down, given that tenant-A could just
drop evil firmware in any number of places, and thus all tenants afterward
are owned anyway?

I think a change of subject line is in order for this topic (assuming it hasn't 
been discussed in sufficient depth already). I propose "[Ironic] Evil Firmware" 
but I didn't change it on this message in case folks interested in this thread 
aren't reading Ironic threads.

Ensuring clean firmware is definitely something Ironic needs to account for. 
Unless you're intending to say that multi-tenant bare metal is a dead end that 
shouldn't be done at all.

As long as anyone is considering Ironic and bare metal in general as a viable 
project and service it is critically important that people are focused on how 
to ensure that a server released by one tenant is clean before being provided 
to another tenant.

It doesn't even have to be evil firmware. Simply providing a tenant with a 
server where the previous tenant screwed up a firmware update or messed with 
BIOS settings or whatever is a problem. If you're going to lease bare metal out 
on a short term basis you've GOT to have some sort of QC to ensure that when 
the hardware is reused for another tenant it's as good as new.

If not, it will be all too common for a tenant to receive a bare metal server 
that's been screwed up by a previous tenant through incompetence as much as 
through maliciousness.



Re: [openstack-dev] Evil Firmware

2014-01-16 Thread CARVER, PAUL
Clint Byrum wrote: 

Excerpts from Alan Kavanagh's message of 2014-01-15 19:11:03 -0800:
 Hi Paul
 
 I posted a query to Ironic which is related to this discussion. My thinking 
 was I want to ensure the case you note here (1)  a tenant can not read 
 another tenants disk.. the next (2) was where in Ironic you provision 
 a baremetal server that has an onboard disk as part of the blade 
 provisioned to a given tenant-A. then when tenant-A finishes his baremetal 
 blade lease and that blade comes back into the pool and tenant-B comes 
 along, I was asking what open source tools guarantee data destruction so 
 that no ghost images  or file retrieval is possible?
 

Is that really a path worth going down, given that tenant-A could just
drop evil firmware in any number of places, and thus all tenants afterward
are owned anyway?

Jumping back to an earlier part of the discussion, it occurs to me that this 
has broader implications. There's some discussion going on under the heading of 
Neutron with regard to PCI passthrough. I imagine it's under Neutron because of 
a desire to provide passthrough access to NICs, but given some of the activity 
around GPU based computing it seems like sooner or later someone is going to 
try to offer multi-tenant cloud servers with the ability to do GPU based 
computing if they haven't already.

I would say that if we're concerned about evil firmware (and I'm certainly not 
saying we shouldn't be concerned) then GPUs are definitely a viable target for 
deploying evil firmware, and NICs might be as well. Furthermore, there may be 
cases where direct access to local disk is desirable for performance reasons 
even if the thing accessing the disk is a VM rather than a bare metal server.

Clint's warning about evil firmware should be seriously contemplated by anybody 
doing any work involving direct hardware access regardless of whether it's 
Ironic, Cinder, Neutron or anywhere else. 



[openstack-dev] [Neutron] Partially Shared Networks

2014-01-15 Thread CARVER, PAUL
Sorry for this not threading properly. I had set the Mailman config to filter 
on Neutron topic but it ended up filtering out everything so I only saw 
responses by looking at the archive. I removed the filter in Mailman and will 
have to filter locally on my end. But I don't have any of the original emails 
from the list to respond to in thread.

Anyway, Mathieu Rohon's response was interesting but not the same notion I was 
thinking of. I'm not talking about what various switch vendors call "private 
VLAN", meaning a layer two segment where any-to-any connectivity is 
deliberately prohibited. That's a useful concept, just not the use case I had 
in mind.

Jay's point about dealing appropriately with overlapping subnets is also 
important in the general case but I had a simpler use case in mind. 
Specifically, I was assuming (although I may not have said so) that the 
networks would be configured by an admin to be available to multiple tenants. I 
hadn't thought of the notion of a tenant making one of their networks available 
to another tenant.

The particular use case I have in mind concerns networks that could technically 
be created as admin and marked as shared and thus have only whatever network 
namespace considerations that apply to shared networks. The desire to make them 
partially shared has more to do with the UI (either Horizon or API access) 
not showing them to tenants who are not on the approved list and not permitting 
tenants who are not on the list to attach instances to them.

This is basically like the door list at a club. If you're not on the list you 
can't get into the club. But if you're on the list, once you're inside the club 
it's not really any different from a less exclusive club other than the fact 
that everybody inside was on the list.


-- 
Paul Carver




Re: [openstack-dev] Proposal for dd disk i/o performance blueprint of cinder.

2014-01-15 Thread CARVER, PAUL
Chris Friesen [mailto:chris.frie...@windriver.com] wrote:

I read a proposal about using thinly-provisioned logical volumes as a 
way around the cost of wiping the disks, since they zero-fill on demand 
rather than incur the cost at deletion time.

I think it makes a difference where the requirement for deletion is coming from.

If it's just to make sure that a tenant can't read another tenant's disk then 
what
you're talking about should work. It sounds similar (or perhaps identical to) 
how
NetApp (and I assume others) work by tracking whether the current client has
written to the volume and returning zeros rather than the actual contents of the
disk sector on a read that precedes the first write to that sector.

However, in that case the previous client's bits are still on the disk. If they 
were
unencrypted then they're still available if someone somehow got ahold of the
physical disk out of the storage array.

That may not be acceptable depending on the tenant's security requirements.

Though one may reasonably ask why they were writing unencrypted bits to
a disk that they didn't have physical control over.



[openstack-dev] [Neutron] Partially Shared Networks

2014-01-10 Thread CARVER, PAUL
If anyone is giving any thought to networks that are available to multiple 
tenants (controlled by a configurable list of tenants) but not visible to all 
tenants I'd like to hear about it.

I'm especially thinking of scenarios where specific networks exist outside of 
OpenStack and have specific purposes and rules for who can deploy servers on 
them. We'd like to enable the use of OpenStack to deploy to these sorts of 
networks but we can't do that with the current shared or not shared binary 
choice.

--
Paul Carver
