Re: [Openstack-operators] [openstack-dev] Device {UUID}c not defined on plugin

2015-06-16 Thread Alvise Dorigo

Hi,
I forgot to attach some relevant config files:

/etc/neutron/plugins/ml2/ml2_conf.ini :

[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ml2_type_vxlan]
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

enable_security_group = True
[ovs]
local_ip = 192.168.61.106
tunnel_type = gre
enable_tunneling = True

/etc/neutron/neutron.conf :

[DEFAULT]
nova_ca_certificates_file = /etc/grid-security/certificates/INFN-CA-2006.pem
auth_strategy = keystone
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_hosts = 192.168.60.105:5672,192.168.60.106:5672
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = https://cloud-areapd.pd.infn.it:8774/v2
nova_admin_username = nova
nova_admin_tenant_id = 1b2caeedb3e2497b935723dc6e142ec9
nova_admin_password = X
nova_admin_auth_url = https://cloud-areapd.pd.infn.it:35357/v2.0
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
verbose = True
debug = False
rabbit_ha_queues = True
dhcp_agents_per_network = 2
[quotas]
[agent]
[keystone_authtoken]
auth_uri = https://cloud-areapd.pd.infn.it:35357/v2.0
auth_url = https://cloud-areapd.pd.infn.it:35357/v2.0
auth_host = cloud-areapd.pd.infn.it
auth_protocol = https
auth_port = 35357
admin_tenant_name = services
admin_user = neutron
admin_password = X
cafile = /etc/grid-security/certificates/INFN-CA-2006.pem
[database]
connection = mysql://neutron_prod:XX@192.168.60.10/neutron_prod
[service_providers]
service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default



And here

http://pastebin.com/P977162t

is the output of ovs-vsctl show.

Alvise



On 16/06/2015 15:30, Alvise Dorigo wrote:

Hi
after a migration from Havana to Icehouse (with the controller and network 
services/agents on the same physical node, and using OVS/GRE) we 
started facing some network-related problems (the internal VLAN tag of the 
element shown by ovs-vsctl show was set to 4095, which is wrong 
AFAIK). At the beginning the problems could be solved by just 
restarting the openvswitch-related agents (and openvswitch itself), or 
by changing the tag by hand; but now networking has stopped 
working entirely.
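Tag 4095 is the value the OVS agent assigns to ports it considers dead, so spotting it quickly in `ovs-vsctl show` output is useful when this problem recurs. A minimal sketch of such a check (the sample output below is illustrative, not taken from this deployment):

```python
import re

def find_dead_vlan_ports(ovs_vsctl_show_output, dead_tag=4095):
    """Return port names whose VLAN tag equals the OVS agent's dead tag."""
    ports = []
    current_port = None
    for line in ovs_vsctl_show_output.splitlines():
        line = line.strip()
        m = re.match(r'Port "?([^"]+)"?$', line)
        if m:
            current_port = m.group(1)
        elif current_port and line == 'tag: %d' % dead_tag:
            ports.append(current_port)
    return ports

# Illustrative ovs-vsctl show style output
sample = '''
    Bridge br-int
        Port "qr-ba295e45-9a"
            tag: 4095
            Interface "qr-ba295e45-9a"
        Port "tap1234abcd-00"
            tag: 1
            Interface "tap1234abcd-00"
'''
print(find_dead_vlan_ports(sample))  # -> ['qr-ba295e45-9a']
```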


When we add a new router interface connected to a tenant LAN, it is 
created in the DOWN state. Then in openvswitch-agent.log we see this 
error message:


2015-06-16 15:07:43.275 40708 WARNING 
neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Device 
ba295e45-9a73-48c1-8864-a59edd5855dc not defined on plugin


and nothing more.
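To see how widespread the problem is, the affected device IDs can be pulled straight out of the agent log. A hedged sketch that matches the warning format shown above:

```python
import re

WARNING_RE = re.compile(r'Device (?P<device>[0-9a-f-]+) not defined on plugin')

def undefined_devices(log_text):
    """Collect the device IDs from 'not defined on plugin' warnings."""
    return sorted(set(WARNING_RE.findall(log_text)))

log = ('2015-06-16 15:07:43.275 40708 WARNING '
       'neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Device '
       'ba295e45-9a73-48c1-8864-a59edd5855dc not defined on plugin\n')
print(undefined_devices(log))
# -> ['ba295e45-9a73-48c1-8864-a59edd5855dc']
```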

Any suggestions?

thanks,

Alvise


__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [nova] [neutron] Re: How do your end users use networking?

2015-06-16 Thread Jay Pipes
Adding -dev because of the reference to the Neutron "Get me a network" 
spec. Also adding [nova] and [neutron] subject markers.


Comments inline, Kris.

On 05/22/2015 09:28 PM, Kris G. Lindgren wrote:

During the Openstack summit this week I got to talk to a number of other
operators of large Openstack deployments about how they do networking.
  I was happy, surprised even, to find that a number of us are using a
similar type of networking strategy.  That we have similar challenges
around networking and are solving it in our own but very similar way.
  It is always nice to see that other people are doing the same things
as you or see the same issues as you are and that you are not crazy.
So in that vein, I wanted to reach out to the rest of the Ops Community
and ask one pretty simple question.

Would it be accurate to say that most of your end users want almost
nothing to do with the network?


That was my experience at AT&T, yes. The vast majority of end users 
could not care less about networking, as long as the connectivity was 
reliable, performed well, and they could connect to the Internet (and 
have others connect from the Internet to their VMs) when needed.



In my experience what the majority of them (both internal and external)
want is to consume from Openstack a compute resource, a property of
which is it that resource has an IP address.  They, at most, care about
which network they are on.  Where a network is usually an arbitrary
definition around a set of real networks, that are constrained to a
location, in which the company has attached some sort of policy.  For
example, I want to be in the "production" network vs. the "xyz lab" 
network, vs. the "backup" network, vs. the "corp" network.  I would say
for Godaddy, 99% of our use cases would be defined as: I want a compute
resource in the production network zone, or I want a compute resource in
this other network zone.  The end user only cares that the IP the vm
receives works in that zone, outside of that they don't care any other
property of that IP.  They do not care what subnet it is in, what vlan
it is on, what switch it is attached to, what router its attached to, or
how data flows in/out of that network.  It just needs to work. We have
also found that by giving the users a floating ip address that can be
moved between vm's (but still constrained within a network zone) we
can solve almost all of our users asks.  Typically, the internal need
for a floating ip is when a compute resource needs to talk to another
protected internal or external resource. Where it is painful (read:
slow) to have the acl's on that protected resource updated. The external
need is from our hosting customers who have a domain name (or many) tied
to an IP address and changing IP's/DNS is particularly painful.


This is precisely my experience as well.


Since the vast majority of our end users don't care about any of the
technical network stuff, we spend a large amount of time/effort in
abstracting or hiding the technical stuff from the users view. Which has
lead to a number of patches that we carry on both nova and neutron (and
are available on our public github).


You may be interested to learn about the "Get Me a Network" 
specification that was discussed in a session at the summit. I had 
requested some time at the summit to discuss this exact use case -- 
where users of Nova actually didn't care much at all about network 
constructs and just wanted to see Nova exhibit similar behaviour as the 
nova-network behaviour of admin sets up a bunch of unassigned networks 
and the first time a tenant launches a VM, she just gets an available 
network and everything is just done for her.


The spec is here:

https://review.openstack.org/#/c/184857/

 At the same time we also have a

*very* small subset of (internal) users who are at the exact opposite
end of the scale.  They care very much about the network details,
possibly all the way down to that they want to boot a vm to a specific
HV, with a specific IP address on a specific network segment.  The
difference however, is that these users are completely aware of the
topology of the network and know which HV's map to which network
segments and are essentially trying to make a very specific ask for
scheduling.


Agreed, at Mirantis (and occasionally at AT&T), we do get some customers 
(mostly telcos, of course) that would like total control over all things 
networking.


Nothing wrong with this, of course. But the point of the above spec is 
to allow normal users to not have to think or know about all the 
advanced networking stuffs if they don't need it. The Neutron API should 
be able to handle both sets of users equally well.


Best,
-jay



Re: [Openstack-operators] [puppet] OpenStack Puppet Modules Usage Questions

2015-06-16 Thread Jonathan Proulx
On Mon, Jun 15, 2015 at 7:46 PM, Richard Raseley rich...@raseley.com wrote:
 As part of wrapping up the few remaining 'loose ends' in moving the Puppet
 modules under the big tent, we are pressing forward with deprecating the
 previously used 'puppet-openst...@puppetlabs.com' mailing list in favor of
 both the openstack-dev and openstack-operators lists with the '[puppet]'
 tag.

 Usage of the openstack-dev list seems pretty straight forward, but we wanted
 to confirm with the broader community that this list (openstack-operators)
 was the appropriate venue for Puppet and OpenStack related usage questions.

 Any objections to this model?

I think it makes perfect sense, especially as other ops-related working
groups are starting to use this list in a similar [tagged] way.

-Jon



[Openstack-operators] Upcoming changes to server version numbering for Liberty

2015-06-16 Thread Doug Hellmann
As we approach the Liberty-1 milestone, we are applying the changes to
server versions discussed at the summit and on the mailing list [1][2].

The tl;dr version is that we are switching from date-based versioning
to SemVer (semantic versioning [3]). This will result in version
numbers appearing to go backwards. Distros will correct this by
incrementing the epoch field of the version number, which is
normally invisible. Anyone doing CI/CD from packages they build
themselves may need to take similar action before/after this change
to ensure that upgrades occur smoothly.

If you want to monitor the patch for your favorite project to know when
the change will happen, they are all set up in a gerrit topic 
"semver-releases" [4] to make them easy to find.

Please reply to the thread on the -dev mailing list with any questions
to keep the discussion in one place.

Doug

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-May/065211.html
[2] 
http://lists.openstack.org/pipermail/openstack-dev/2015-June/thread.html#65278
[3] http://docs.openstack.org/developer/pbr/semver.html
[4] https://review.openstack.org/#/q/topic:semver-releases,n,z
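The reason the epoch bump matters: naive version comparison treats the old date-based numbers as larger than the new SemVer-style ones, so package managers would see the switch as a downgrade. A toy illustration (the version strings are examples, not the actual release numbers):

```python
def vtuple(version):
    """Parse 'x.y.z' into a comparable tuple of ints."""
    return tuple(int(p) for p in version.split('.'))

old = vtuple('2015.1.0')   # date-based, Kilo-style version
new = vtuple('12.0.0')     # SemVer-style, Liberty version
print(old > new)           # True: looks like a downgrade without an epoch

# Distros fix this with an epoch field, compared before the version itself:
def with_epoch(epoch, version):
    return (epoch,) + vtuple(version)

print(with_epoch(0, '2015.1.0') < with_epoch(1, '12.0.0'))  # True
```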



[Openstack-operators] Could not get IP to instance through neutron

2015-06-16 Thread pra devOPS
All:

I have installed and configured neutron (using the official docs); my
OpenStack version is Icehouse and the OS is CentOS 7.


After creating the neutron network and starting the instance, the instance
goes to the ERROR state and says "No valid host found".


/var/log/neutron/openvswitch-agent.log shows the following:

Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf',
'ovs-ofctl', 'dump-flows', 'br-int', 'table=22']
Exit code: 1
Stdout: ''
Stderr: 'ovs-ofctl: br-int is not a bridge or a socket\n'
2015-06-16 15:03:11.458 14566 ERROR neutron.agent.linux.ovs_lib [-] Unable
to execute ['ovs-ofctl', 'dump-flows', 'br-int', 'table=22']. Exception:
Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf',
'ovs-ofctl', 'dump-flows', 'br-int', 'table=22']
Exit code: 1
Stdout: ''
Stderr: 'ovs-ofctl: br-int is not a bridge or a socket\n'
2015-06-16 15:21:00.204 14566 INFO neutron.agent.securitygroups_rpc [-]
Preparing filters for devices set([u'gw-cea73d4a-1c'])
2015-06-16 15:21:00.487 14566 WARNING
neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Device
gw-cea73d4a-1c not defined on plugin


The nova-compute log shows the following:


REQ: curl -i http://10.0.0.125:35357/v2.0/tokens -X POST -H Content-Type:
application/json -H Accept: application/json -H User-Agent:
python-neutronclient -d '{"auth": {"tenantName": "service",
"passwordCredentials": {"username": "neutron", "password": "REDACTED"}}}'
 http_log_req
/usr/lib/python2.7/site-packages/neutronclient/common/utils.py:173
2015-06-16 16:08:22.313 23468 DEBUG neutronclient.client [-]
RESP:{'status': '401', 'content-length': '114', 'vary': 'X-Auth-Token',
'date': 'Tue, 16 Jun 2015 23:08:22 GMT', 'content-type':
'application/json', 'www-authenticate': 'Keystone uri=
http://10.10.202.125:35357;'} {"error": {"message": "The request you have
made requires authentication.", "code": 401, "title": "Unauthorized"}}
 http_log_resp
/usr/lib/python2.7/site-packages/neutronclient/common/utils.py:179
2015-06-16 16:08:22.314 23468 ERROR nova.compute.manager [-] [instance:
9667a1d6-9883-429d-8b85-6f0ad0a8d887] An error occurred while refreshing
the network cache.
2015-06-16 16:08:22.314 23468 TRACE nova.compute.manager [instance:
9667a1d6-9883-429d-8b85-6f0ad0a8d887] Traceback (most recent call last):
2015-06-16 16:08:22.314 23468 TRACE nova.compute.manager [instance:
9667a1d6-9883-429d-8b85-6f0ad0a8d887]   File
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 4938, in
_heal_instance_info_cache
2015-06-16 16:08:22.314 23468 TRACE nova.compute.manager [instance:
9667a1d6-9883-429d-8b85-6f0ad0a8d887]
self._get_instance_nw_info(context, instance, use_slave=True)
2015-06-16 16:08:22.314 23468 TRACE nova.compute.manager [instance:
9667a1d6-9883-429d-8b85-6f0ad0a8d887]   File
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 1163, in
_get_instance_nw_info
2015-06-16 16:08:22.314 23468 TRACE nova.compute.manager [instance:
9667a1d6-9883-429d-8b85-6f0ad0a8d887] instance)
2015-06-16 16:08:22.314 23468 TRACE nova.compute.manager [instance:
9667a1d6-9883-429d-8b85-6f0ad0a8d887]   File
/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py, line 482,
in get_instance_nw_info
2015-06-16 16:08:22.314 23468 TRACE nova.compute.manager [instance:
9667a1d6-9883-429d-8b85-6f0ad0a8d887] port_ids)
2015-06-16 16:08:22.314 23468 TRACE nova.compute.manager [instance:
9667a1d6-9883-429d-8b85-6f0ad0a8d887]   File
/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py, line 496,
in _get_instance_nw_info
2015-06-16 16:08:22.314 23468 TRACE nova.compute.manager [instance:
9667a1d6-9883-429d-8b85-6f0ad0a8d887] port_ids)
2015-06-16 16:08:22.314 23468 TRACE nova.compute.manager [instance:
9667a1d6-9883-429d-8b85-6f0ad0a8d887]   File
/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py, line
1137, in _build_network_info_model
2015-06-16 16:08:22.314 23468 TRACE nova.compute.manager [instance:
9667a1d6-9883-429d-8b85-6f0ad0a8d887] data =
client.list_ports(**search_opts)
2015-06-16 16:08:22.314 23468 TRACE nova.compute.manager [instance:
9667a1d6-9883-429d-8b85-6f0ad0a8d887]   File
/usr/lib/python2.7/site-packages/nova/network/neutronv2/__init__.py, line
81, in wrapper
2015-06-16 16:08:22.314 23468 TRACE nova.compute.manager [instance:
9667a1d6-9883-429d-8b85-6f0ad0a8d887] ret = obj(*args, **kwargs)



Below are my settings:


== Nova networks ==
+--++--+
| ID   | Label  | Cidr |
+--++--+
| 7d554063-8b7e-40a9-af02-738ca2b480a4 | net-ext| -|
| f14cecdf-95de-424a-871e-f278e61a008b | Int-Subnet | -|
+--++--+
== Nova instance flavors ==
++---+---+--+---+--+---+-+---+
| ID | Name  | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |

Re: [Openstack-operators] [openstack-dev] [nova] [neutron] Re: How do your end users use networking?

2015-06-16 Thread Sam Morrison

 On 17 Jun 2015, at 10:56 am, Armando M. arma...@gmail.com wrote:
 
 
 
 On 16 June 2015 at 17:31, Sam Morrison sorri...@gmail.com wrote:
 We at NeCTAR are starting the transition to neutron from nova-net and neutron 
 almost does what we want.
 
 We have 10 “public” networks and 10 “service” networks and depending on which 
 compute node you land on you get attached to one of them.
 
 In neutron speak we have multiple shared externally routed provider networks. 
 We don’t have any tenant networks or any other fancy stuff yet.
 How I’ve currently got this set up is by creating 10 networks and subsequent 
 subnets eg. public-1, public-2, public-3 … and service-1, service-2, 
 service-3 and so on.
 
 In nova we have made a slight change in allocate for instance [1] whereby the 
 compute node has a designated hardcoded network_ids for the public and 
 service network it is physically attached to.
 We have also made changes in the nova API so users can’t select a network and 
 the neutron endpoint is not registered in keystone.
 
 That all works fine but ideally I want a user to be able to choose if they 
 want a public and or service network. We can’t let them as we have 10 public 
 networks, we almost need something in neutron like a “network group” or 
 something that allows a user to select “public” and it allocates them a port 
 in one of the underlying public networks.
 
 I tried going down the route of having 1 public and 1 service network in 
 neutron then creating 10 subnets under each. That works until you get to 
 things like dhcp-agent and metadata agent although this looks like it could 
 work with a few minor changes. Basically I need a dhcp-agent to be spun up 
 per subnet and ensure they are spun up in the right place.
 
 I’m not sure what the correct way of doing this is. What are other people doing 
 in the interim until this kind of use case can be done in Neutron?
 
 Would something like [1] be adequate to address your use case? If not, I'd 
 suggest you to file an RFE bug (more details in [2]), so that we can keep the 
 discussion focused on this specific case.
 
 HTH
 Armando
 
 [1] https://blueprints.launchpad.net/neutron/+spec/rbac-networks
That’s not applicable in this case; we don’t care about which tenant 
things belong to here.

 [2] 
 https://github.com/openstack/neutron/blob/master/doc/source/policies/blueprints.rst#neutron-request-for-feature-enhancements
The bug Kris mentioned outlines all I want too I think.

Sam


 
  
 
 Cheers,
 Sam
 
 [1] 
 https://github.com/NeCTAR-RC/nova/commit/1bc2396edc684f83ce471dd9dc9219c4635afb12
 
 
 
  On 17 Jun 2015, at 12:20 am, Jay Pipes jaypi...@gmail.com wrote:
 
  Adding -dev because of the reference to the Neutron "Get me a network" 
  spec. Also adding [nova] and [neutron] subject markers.
 
  Comments inline, Kris.
 
  On 05/22/2015 09:28 PM, Kris G. Lindgren wrote:
  During the Openstack summit this week I got to talk to a number of other
  operators of large Openstack deployments about how they do networking.
   I was happy, surprised even, to find that a number of us are using a
  similar type of networking strategy.  That we have similar challenges
  around networking and are solving it in our own but very similar way.
   It is always nice to see that other people are doing the same things
  as you or see the same issues as you are and that you are not crazy.
  So in that vein, I wanted to reach out to the rest of the Ops Community
  and ask one pretty simple question.
 
  Would it be accurate to say that most of your end users want almost
  nothing to do with the network?
 
  That was my experience at AT&T, yes. The vast majority of end users could 
  not care less about networking, as long as the connectivity was reliable, 
  performed well, and they could connect to the Internet (and have others 
  connect from the Internet to their VMs) when needed.
 
  In my experience what the majority of them (both internal and external)
  want is to consume from Openstack a compute resource, a property of
  which is it that resource has an IP address.  They, at most, care about
  which network they are on.  Where a network is usually an arbitrary
  definition around a set of real networks, that are constrained to a
  location, in which the company has attached some sort of policy.  For
  example, I want to be in the "production" network vs. the "xyz lab"
  network, vs. the "backup" network, vs. the "corp" network.  I would say
  for Godaddy, 99% of our use cases would be defined as: I want a compute
  resource in the production network zone, or I want a compute resource in
  this other network zone.  The end user only cares 

Re: [Openstack-operators] [nova] [neutron] Re: How do your end users use networking?

2015-06-16 Thread Kris G. Lindgren
We are doing pretty much the same thing, but in a slightly different way.
We extended the nova scheduler to help choose networks (i.e., don't put
VMs on a network/host that doesn't have any available IP addresses). Then,
we add to the host aggregate that each HV is attached to a network
metadata item which maps to the names of the neutron networks that host
supports.  This basically creates the mapping of which host supports which
networks, so we can correctly filter hosts out during scheduling. We do
allow people to choose a network if they wish, and we do have the neutron
endpoint exposed. However, by default, if they do not supply a boot
command with a network, we will filter the networks down and choose one
for them.  That way they never hit [1].  This also works well for us,
because the default UI that we provide our end users is not horizon.
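A stripped-down sketch of that kind of filtering logic, using plain dicts in place of nova's scheduler interfaces (the class name and the `networks` aggregate metadata key are made up for illustration):

```python
class NetworkAggregateFilter:
    """Pass only hosts whose aggregate metadata lists the requested network.

    'networks' is a hypothetical aggregate metadata key holding a
    comma-separated list of neutron network names the host supports.
    """

    def host_passes(self, host_metadata, requested_network):
        supported = host_metadata.get('networks', '')
        return requested_network in {n.strip() for n in supported.split(',')}

f = NetworkAggregateFilter()
hosts = {
    'hv-01': {'networks': 'public-1, service-1'},
    'hv-02': {'networks': 'public-2, service-2'},
}
candidates = [h for h, md in hosts.items() if f.host_passes(md, 'public-2')]
print(candidates)  # -> ['hv-02']
```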

We currently only support one network per HV via this configuration, but
we would like to be able to expose a network type or group via neutron
in the future.  

I believe what you described below is also another way of phrasing the ask
that we had in [2]: that you want to define multiple top-level networks
in neutron, 'public' and 'service', each made up of multiple disparate
L2 networks ('public-1', 'public-2', etc.) which are independently
constrained to a specific set of hosts/switches/datacenters.

We have talked about working around this under our configuration in one of
two ways.  First, use availability zones to provide the separation
between 'public' and 'service', or in our case 'prod', 'pki', 'internal',
etc.  This would work well for our current use case (one type of
network per HV), but would most likely be wasteful in yours.  You could
probably make the change that allows a HV to exist in more than one
availability zone, which would allow you to specify the same hypervisors
for the public and service AZs and thus not be wasteful.  Second,
create additional flavors that have a network_group attribute and do
some extra filtering on that.  We had some other ideas as well, but had a
number of open questions about how to get them fully implemented.


[1] 
https://github.com/NeCTAR-RC/nova/commit/1bc2396edc684f83ce471dd9dc9219c4635afb12#diff-36d2d42967c808d55e7a129fe0200734L328
[2] https://bugs.launchpad.net/neutron/+bug/1458890



 
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy, LLC.





On 6/16/15, 6:31 PM, Sam Morrison sorri...@gmail.com wrote:

We at NeCTAR are starting the transition to neutron from nova-net and
neutron almost does what we want.

We have 10 “public” networks and 10 “service” networks and depending on
which compute node you land on you get attached to one of them.

In neutron speak we have multiple shared externally routed provider
networks. We don’t have any tenant networks or any other fancy stuff yet.
How I’ve currently got this set up is by creating 10 networks and
subsequent subnets eg. public-1, public-2, public-3 … and service-1,
service-2, service-3 and so on.

In nova we have made a slight change in allocate for instance [1] whereby
the compute node has a designated hardcoded network_ids for the public
and service network it is physically attached to.
We have also made changes in the nova API so users can’t select a network
and the neutron endpoint is not registered in keystone.

That all works fine but ideally I want a user to be able to choose if
they want a public and or service network. We can’t let them as we have
10 public networks; we almost need something in neutron like a “network
group” or something that allows a user to select “public” and it
allocates them a port in one of the underlying public networks.

I tried going down the route of having 1 public and 1 service network in
neutron then creating 10 subnets under each. That works until you get to
things like dhcp-agent and metadata agent although this looks like it
could work with a few minor changes. Basically I need a dhcp-agent to be
spun up per subnet and ensure they are spun up in the right place.

I’m not sure what the correct way of doing this is. What are other people
doing in the interim until this kind of use case can be done in Neutron?

Cheers,
Sam
 
[1] 
https://github.com/NeCTAR-RC/nova/commit/1bc2396edc684f83ce471dd9dc9219c4635afb12



 On 17 Jun 2015, at 12:20 am, Jay Pipes jaypi...@gmail.com wrote:
 
 Adding -dev because of the reference to the Neutron "Get me a network"
spec. Also adding [nova] and [neutron] subject markers.
 
 Comments inline, Kris.
 
 On 05/22/2015 09:28 PM, Kris G. Lindgren wrote:
 During the Openstack summit this week I got to talk to a number of
other
 operators of large Openstack deployments about how they do networking.
  I was happy, surprised even, to find that a number of us are using a
 similar type of networking strategy.  That we have similar challenges
 around networking and are solving it in our own but very similar way.
  It is always nice to see that other 

Re: [Openstack-operators] [nova] [neutron] Re: How do your end users use networking?

2015-06-16 Thread Sam Morrison
We at NeCTAR are starting the transition to neutron from nova-net and neutron 
almost does what we want.

We have 10 “public” networks and 10 “service” networks and depending on which 
compute node you land on you get attached to one of them.

In neutron speak we have multiple shared externally routed provider networks. 
We don’t have any tenant networks or any other fancy stuff yet.
How I’ve currently got this set up is by creating 10 networks and subsequent 
subnets eg. public-1, public-2, public-3 … and service-1, service-2, service-3 
and so on.

In nova we have made a slight change in allocate for instance [1] whereby the 
compute node has a designated hardcoded network_ids for the public and service 
network it is physically attached to.
We have also made changes in the nova API so users can’t select a network and 
the neutron endpoint is not registered in keystone.

That all works fine but ideally I want a user to be able to choose if they want 
a public and or service network. We can’t let them as we have 10 public 
networks, we almost need something in neutron like a “network group” or 
something that allows a user to select “public” and it allocates them a port in 
one of the underlying public networks.

I tried going down the route of having 1 public and 1 service network in 
neutron then creating 10 subnets under each. That works until you get to things 
like dhcp-agent and metadata agent although this looks like it could work with 
a few minor changes. Basically I need a dhcp-agent to be spun up per subnet and 
ensure they are spun up in the right place.

I’m not sure what the correct way of doing this is. What are other people doing in 
the interim until this kind of use case can be done in Neutron?

Cheers,
Sam
 
[1] 
https://github.com/NeCTAR-RC/nova/commit/1bc2396edc684f83ce471dd9dc9219c4635afb12



 On 17 Jun 2015, at 12:20 am, Jay Pipes jaypi...@gmail.com wrote:
 
 Adding -dev because of the reference to the Neutron "Get me a network" spec. 
 Also adding [nova] and [neutron] subject markers.
 
 Comments inline, Kris.
 
 On 05/22/2015 09:28 PM, Kris G. Lindgren wrote:
 During the Openstack summit this week I got to talk to a number of other
 operators of large Openstack deployments about how they do networking.
  I was happy, surprised even, to find that a number of us are using a
 similar type of networking strategy.  That we have similar challenges
 around networking and are solving it in our own but very similar way.
  It is always nice to see that other people are doing the same things
 as you or see the same issues as you are and that you are not crazy.
 So in that vein, I wanted to reach out to the rest of the Ops Community
 and ask one pretty simple question.
 
 Would it be accurate to say that most of your end users want almost
 nothing to do with the network?
 
 That was my experience at AT&T, yes. The vast majority of end users could not 
 care less about networking, as long as the connectivity was reliable, 
 performed well, and they could connect to the Internet (and have others 
 connect from the Internet to their VMs) when needed.
 
 In my experience what the majority of them (both internal and external)
 want is to consume from Openstack a compute resource, a property of
 which is it that resource has an IP address.  They, at most, care about
 which network they are on.  Where a network is usually an arbitrary
 definition around a set of real networks, that are constrained to a
 location, in which the company has attached some sort of policy.  For
 example, I want to be in the "production" network vs. the "xyz lab"
 network, vs. the "backup" network, vs. the "corp" network.  I would say
 for Godaddy, 99% of our use cases would be defined as: I want a compute
 resource in the production network zone, or I want a compute resource in
 this other network zone.  The end user only cares that the IP the vm
 receives works in that zone, outside of that they don't care any other
 property of that IP.  They do not care what subnet it is in, what vlan
 it is on, what switch it is attached to, what router its attached to, or
 how data flows in/out of that network.  It just needs to work. We have
 also found that by giving the users a floating ip address that can be
 moved between vm's (but still constrained within a network zone) we
 can solve almost all of our users asks.  Typically, the internal need
 for a floating ip is when a compute resource needs to talk to another
 protected internal or external resource. Where it is painful (read:
 slow) to have the acl's on that protected resource updated. The external
 need is from our hosting customers who have a domain name (or many) tied
 to an IP address and changing IP's/DNS is particularly painful.
 
 This is precisely my experience as well.
 
 Since the vast majority of our end users don't care about any of the
 technical network stuff, we spend a large amount of time/effort in
 abstracting or hiding the technical stuff from the users view. Which 

Re: [Openstack-operators] Neutron-openvswitch-agent service fails to start

2015-06-16 Thread pra devOPS
Can somebody suggest me on this,

I have followed the document as-is but could not get the neutron-openvswitch-agent
running.

when I run the service

[root@myopenstack ~]# service neutron-openvswitch-agent start


I get below Error.


usage: neutron-openvswitch-agent [-h] [--config-dir DIR] [--config-file
PATH]
 [--debug] [--log-config-append PATH]
 [--log-date-format DATE_FORMAT]
 [--log-dir LOG_DIR] [--log-file PATH]
 [--log-format FORMAT] [--nodebug]
 [--nouse-syslog] [--noverbose]
 [--state_path STATE_PATH]
 [--syslog-log-facility SYSLOG_LOG_FACILITY]
 [--use-syslog] [--verbose] [--version]
neutron-openvswitch-agent: error: unrecognized arguments: start
[root@myopenstack ~]#
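That usage error means the agent binary itself ended up being executed with "start" as an argument; its option parser accepts flags only, no positional subcommands. A minimal sketch reproducing the failure mode (the option list here is abbreviated and illustrative):

```python
import argparse

# Mimic the agent's CLI: optional flags only, no positional arguments.
parser = argparse.ArgumentParser(prog='neutron-openvswitch-agent')
parser.add_argument('--config-file', metavar='PATH')
parser.add_argument('--debug', action='store_true')
# ...the real agent registers many more options, but no positionals...

try:
    parser.parse_args(['start'])
except SystemExit:
    # argparse reports: "error: unrecognized arguments: start" and exits
    print('rejected the positional "start" argument')
```

So the fix is to start the agent through the service manager (on CentOS 7, `systemctl start neutron-openvswitch-agent`) rather than passing "start" to the binary itself.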

thanks, Dev

On Mon, Jun 15, 2015 at 12:14 PM, pra devOPS siv.dev...@gmail.com wrote:

 All:

 I have installed the neutron plugin and trying to start


  neutron-openvswitch-agent by giving the following command.

 [root@myopenstack ~]# service neutron-openvswitch-agent start


 I get below Error.


 usage: neutron-openvswitch-agent [-h] [--config-dir DIR] [--config-file
 PATH]
  [--debug] [--log-config-append PATH]
  [--log-date-format DATE_FORMAT]
  [--log-dir LOG_DIR] [--log-file PATH]
  [--log-format FORMAT] [--nodebug]
  [--nouse-syslog] [--noverbose]
  [--state_path STATE_PATH]
  [--syslog-log-facility
 SYSLOG_LOG_FACILITY]
  [--use-syslog] [--verbose] [--version]
 neutron-openvswitch-agent: error: unrecognized arguments: start
 [root@myopenstack ~]#


 Any help would be highly appreciated



Re: [Openstack-operators] [ops][tags][packaging] ops:packaging tag - a little common sense, please

2015-06-16 Thread Thomas Goirand
Thanks Jay for this.

I basically agree with all you wrote.

On 06/10/2015 07:51 PM, Jay Pipes wrote:
 I don't believe the Ops Tags team should be curating the packaging tags
 -- the packaging community should do that, and do that under the main
 openstack/governance repository.
 
 Packagers, I would love it if you would curate a set of tags that looks
 kind of like this:
 
  - packaged:centos:kilo
  - packaged:ubuntu:liberty
  - packaged:sles:juno

As you wrote, the list will be *very* outdated *very* fast. I don't see
the point of having such tagging scheme, when all is available in a
central place [1] already.

I'm not happy either with the fact that there would be only a single
"apt" definition for the quality, when Debian & Ubuntu packages are
different, especially since I take great care to reduce the number of
bugs within the Debian tracker [2]. I've raised the issue multiple times
on the blueprint, but I basically got ignored.

If we want this blueprint to get through, please take into account
remarks that reviewers are making.

Cheers,

Thomas Goirand (zigo)

[1]
https://qa.debian.org/developer.php?login=openstack-de...@lists.alioth.debian.org

[2]
https://bugs.debian.org/cgi-bin/pkgreport.cgi?which=maint&data=openstack-devel%40lists.alioth.debian.org&archive=no&raw=yes&bug-rev=yes&pend-exc=fixed&pend-exc=done


Note on this URL: Yes, only 6 bugs currently reported open in Debian,
out of 242 packages. And with 5 of those bugs needing upstream action
(getting off of suds, pyeclib needing a new release), and one pending
Debian FTP masters' approval of the package, that's essentially zero
actionable bugs to me!!! Please do submit a bug, and I'll do my best to
close it in record time...

