Re: [Openstack-operators] Could not get IP to instance through neutron

2015-06-17 Thread pra devOPS
Hi Eren,

I created it using ovs-vsctl.

I have recreated the entire networks, and I still could not see br-ex
and br-int when I run ifconfig.

When I run ifconfig -a, I do see them.

Below is what I see:

 ovs-vsctl show
89744d4a-f9e5-4fe1-b35e-81af9aaa0bac
Bridge br-int
fail_mode: secure
Port br-int
Interface br-int
type: internal
Port tap1f2ed78e-f1
tag: 4095
Interface tap1f2ed78e-f1
type: internal
Port gw-cea73d4a-1c
tag: 4095
Interface gw-cea73d4a-1c
type: internal
Port tapbedd14b9-c1
tag: 4095
Interface tapbedd14b9-c1
type: internal
Port qr-58d11b25-de
tag: 4095
Interface qr-58d11b25-de
type: internal
Bridge br-ex
Port br-ex
Interface br-ex
type: internal
Port vlan410
Interface vlan410
Port qg-c57b4514-3e
Interface qg-c57b4514-3e
type: internal
ovs_version: 2.1.3


I see br-int and br-ex in ifconfig -a; their state is down (no IP
assigned to them).
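For what it's worth, the tag 4095 on every port in that ovs-vsctl output is the neutron OVS agent's "dead VLAN": ports get parked there when the agent cannot wire them up, so all of those ports failed binding. A small best-effort sketch (assuming the `ovs-vsctl show` text format above) that flags such ports:

```python
import re

def find_dead_vlan_ports(ovs_show_output, dead_tag=4095):
    """Return port names parked on the OVS agent's 'dead VLAN' tag.

    The neutron OVS agent sets tag 4095 on ports it could not wire up,
    so any port reported here failed binding. Parsing is a best-effort
    sketch against ``ovs-vsctl show`` text output.
    """
    dead_ports = []
    current_port = None
    for line in ovs_show_output.splitlines():
        line = line.strip()
        m = re.match(r'Port\s+"?([\w-]+)"?', line)
        if m:
            current_port = m.group(1)
        elif line == "tag: %d" % dead_tag and current_port:
            dead_ports.append(current_port)
    return dead_ports

# Sample trimmed from the output above.
sample = """Bridge br-int
    Port br-int
        Interface br-int
            type: internal
    Port tap1f2ed78e-f1
        tag: 4095
        Interface tap1f2ed78e-f1
            type: internal
    Port qr-58d11b25-de
        tag: 4095
        Interface qr-58d11b25-de
            type: internal
"""

print(find_dead_vlan_ports(sample))  # -> ['tap1f2ed78e-f1', 'qr-58d11b25-de']
```

If this reports every non-bridge port, the agent itself is failing, which matches the RPC errors in the l3-agent log below.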

the l3-agent log says below:

2015-06-17 14:27:31.966 24566 ERROR neutron.agent.l3_agent
[req-97b56678-b1e8-4de7-a838-86ebbe240ae8 None] Failed synchronizing
routers due to RPC error
2015-06-17 14:27:31.966 24566 TRACE neutron.agent.l3_agent Traceback (most
recent call last):
2015-06-17 14:27:31.966 24566 TRACE neutron.agent.l3_agent   File
/usr/lib/python2.7/site-packages/neutron/agent/l3_agent.py, line 865, in
_sync_routers_task
2015-06-17 14:27:31.966 24566 TRACE neutron.agent.l3_agent context,
router_ids)
2015-06-17 14:27:31.966 24566 TRACE neutron.agent.l3_agent   File
/usr/lib/python2.7/site-packages/neutron/agent/l3_agent.py, line 76, in
get_routers
2015-06-17 14:27:31.966 24566 TRACE neutron.agent.l3_agent
topic=self.topic)
2015-06-17 14:27:31.966 24566 TRACE neutron.agent.l3_agent   File
/usr/lib/python2.7/site-packages/neutron/openstack/common/rpc/proxy.py,
line 129, in call
2015-06-17 14:27:31.966 24566 TRACE neutron.agent.l3_agent exc.info,
real_topic, msg.get('method'))
2015-06-17 14:27:31.966 24566 TRACE neutron.agent.l3_agent Timeout: Timeout
while waiting on RPC response - topic: q-l3-plugin, RPC method:
sync_routers info: unknown
2015-06-17 14:27:31.966 24566 TRACE neutron.agent.l3_agent
2015-06-17 14:27:31.967 24566 WARNING neutron.openstack.common.loopingcall
[req-97b56678-b1e8-4de7-a838-86ebbe240ae8 None] task run outlasted interval
by 58.01159 sec
2015-06-17 14:27:31.968 24566 WARNING neutron.openstack.common.loopingcall
[req-97b56678-b1e8-4de7-a838-86ebbe240ae8 None] task run outlasted interval
by 20.012439 sec
2015-06-17 14:28:08.075 24566 INFO neutron.openstack.common.service
[req-97b56678-b1e8-4de7-a838-86ebbe240ae8 None] Caught SIGTERM, exiting
2015-06-17 14:28:08.833 25336 INFO neutron.common.config [-] Logging
enabled!
2015-06-17 14:28:08.914 25336 INFO neutron.openstack.common.rpc.impl_qpid
[req-7208d75e-f246-4932-9bf3-208e70c4c38f None] Connected to AMQP server on
10.0.0.125:5672
2015-06-17 14:28:08.916 25336 INFO neutron.openstack.common.rpc.impl_qpid
[req-7208d75e-f246-4932-9bf3-208e70c4c38f None] Connected to AMQP server on
10.0.0.125:5672
2015-06-17 14:28:08.939 25336 INFO neutron.openstack.common.rpc.impl_qpid
[req-7208d75e-f246-4932-9bf3-208e70c4c38f None] Connected to AMQP server on
10.0.0.125:5672
2015-06-17 14:28:08.943 25336 INFO neutron.agent.l3_agent
[req-7208d75e-f246-4932-9bf3-208e70c4c38f None] L3 agent started
2015-06-17 14:28:14.669 25336 INFO neutron.agent.linux.interface
[req-7208d75e-f246-4932-9bf3-208e70c4c38f None] Device qg-c57b4514-3e
already exists
2015-06-17 14:28:15.374 25336 WARNING neutron.openstack.common.loopingcall
[req-7208d75e-f246-4932-9bf3-208e70c4c38f None] task run outlasted interval
by 0.523636 sec


There are no errors in the neutron server log.


The nova-compute log says:


2015-06-17 14:47:03.014 25349 ERROR nova.compute.manager
[req-6b58be6c-2d97-4f91-b50c-5f64634f7e22 4ca4e46cf5564ce5b158bf2eef161618
6341291002db49b8ae6baa6e6c6455f3] [instance:
e7def19a-5d8e-439b-b68e-0d883c66f96a] Error: Unexpected
vif_type=binding_failed
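vif_type=binding_failed usually means the neutron server could not find a live, matching agent for the compute host when binding the port, and a common cause is a mismatch between the server-side and agent-side plugin settings. A hedged sketch of comparing the two (section/option names and values here are illustrative, not a statement of what this deployment uses):

```python
import configparser

def diff_ovs_settings(server_ini, agent_ini, options):
    """Report options that differ between two INI config snippets.

    ``server_ini``/``agent_ini`` are INI text; ``options`` is a list of
    (section, key) pairs to compare. Returns the mismatched tuples.
    """
    def load(text):
        cp = configparser.ConfigParser()
        cp.read_string(text)
        return cp

    server, agent = load(server_ini), load(agent_ini)
    mismatches = []
    for section, key in options:
        s = server.get(section, key, fallback=None)
        a = agent.get(section, key, fallback=None)
        if s != a:
            mismatches.append((section, key, s, a))
    return mismatches

# Hypothetical snippets: the controller expects VLANs, the compute agent
# was left configured for flat networking -- binding would fail.
server_cfg = "[ovs]\ntenant_network_type = vlan\nbridge_mappings = physnet1:br-ex\n"
agent_cfg = "[ovs]\ntenant_network_type = flat\nbridge_mappings = physnet1:br-ex\n"

print(diff_ovs_settings(server_cfg, agent_cfg,
                        [("ovs", "tenant_network_type"),
                         ("ovs", "bridge_mappings")]))
# -> [('ovs', 'tenant_network_type', 'vlan', 'flat')]
```

Checking that the OVS agent is actually running and reporting in (e.g. `neutron agent-list`) is the first thing to rule out before comparing configs.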


Can somebody suggest what to check next?


Thanks,
Dev










On Wed, Jun 17, 2015 at 2:56 AM, Eren Türkay er...@skyatlas.com wrote:

 On 17-06-2015 02:08, pra devOPS wrote:
  Stderr: 'ovs-ofctl: br-int is not a bridge or a socket\n'

 I think this error is clear. br-int seems to be created in a wrong way.
 How did
 you create this interface?

 --
 Eren Türkay, System Administrator
 https://skyatlas.com/ | +90 850 885 0357

 Yildiz Teknik Universitesi Davutpasa Kampusu
 Teknopark Bolgesi, D2 Blok No:107
 Esenler, Istanbul Pk.34220


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Could not get IP to instance through neutron

2015-06-17 Thread pra devOPS
Eren,

All the config files are placed here:

http://paste.openstack.org/show/299701/

On Tue, Jun 16, 2015 at 4:08 PM, pra devOPS siv.dev...@gmail.com wrote:

 All:

 I have installed and configured neutron using the official docs; my
 OpenStack version is Icehouse and the OS is CentOS 7.


 After creating the neutron network and starting the instance, the instance
 goes to the ERROR state with No valid host was found.


 /var/log/neutron/openvswitch-agent.log shows the following:

 Command: ['sudo', 'neutron-rootwrap',
 '/etc/neutron/rootwrap.conf', 'ovs-ofctl', 'dump-flows', 'br-int',
 'table=22']
 Exit code: 1
 Stdout: ''
 Stderr: 'ovs-ofctl: br-int is not a bridge or a socket\n'
 2015-06-16 15:03:11.458 14566 ERROR neutron.agent.linux.ovs_lib [-] Unable
 to execute ['ovs-ofctl', 'dump-flows', 'br-int', 'table=22']. Exception:
 Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf',
 'ovs-ofctl', 'dump-flows', 'br-int', 'table=22']
 Exit code: 1
 Stdout: ''
 Stderr: 'ovs-ofctl: br-int is not a bridge or a socket\n'
 2015-06-16 15:21:00.204 14566 INFO neutron.agent.securitygroups_rpc [-]
 Preparing filters for devices set([u'gw-cea73d4a-1c'])
 2015-06-16 15:21:00.487 14566 WARNING
 neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Device
 gw-cea73d4a-1c not defined on plugin
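"br-int is not a bridge or a socket" means ovs-ofctl could not talk to the bridge at all, typically because it was created outside Open vSwitch (e.g. with brctl) or is missing from ovsdb. A sketch of the recreate sequence, built as plain command lists and only printed here (running them, e.g. via subprocess as root, disconnects all ports on the bridge, and the OVS agent must be restarted afterwards to re-wire them):

```python
def bridge_repair_commands(bridge="br-int"):
    """Return the ovs-vsctl commands that recreate an integration bridge.

    This only builds the command lists; actually executing them will
    drop every port on the bridge, after which the neutron OVS agent
    needs a restart to re-plug its flows and ports.
    """
    return [
        ["ovs-vsctl", "--if-exists", "del-br", bridge],
        ["ovs-vsctl", "add-br", bridge],
        ["ovs-vsctl", "br-set-external-id", bridge, "bridge-id", bridge],
    ]

for cmd in bridge_repair_commands():
    print(" ".join(cmd))
```

The `bridge-id` external-id step is how the OVS agent conventionally labels its bridges; treat the exact sequence as a sketch to adapt, not a guaranteed fix.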


 The nova-compute log shows the following:


 REQ: curl -i http://10.0.0.125:35357/v2.0/tokens -X POST -H
 "Content-Type: application/json" -H "Accept: application/json" -H
 "User-Agent: python-neutronclient" -d '{"auth": {"tenantName": "service",
 "passwordCredentials": {"username": "neutron", "password": "REDACTED"}}}'
  http_log_req
 /usr/lib/python2.7/site-packages/neutronclient/common/utils.py:173
 2015-06-16 16:08:22.313 23468 DEBUG neutronclient.client [-]
 RESP:{'status': '401', 'content-length': '114', 'vary': 'X-Auth-Token',
 'date': 'Tue, 16 Jun 2015 23:08:22 GMT', 'content-type':
 'application/json', 'www-authenticate': 'Keystone uri="http://10.10.202.125:35357"'}
 {"error": {"message": "The request you have made requires
 authentication.", "code": 401, "title": "Unauthorized"}}
  http_log_resp
 /usr/lib/python2.7/site-packages/neutronclient/common/utils.py:179
 2015-06-16 16:08:22.314 23468 ERROR nova.compute.manager [-] [instance:
 9667a1d6-9883-429d-8b85-6f0ad0a8d887] An error occurred while refreshing
 the network cache.
 2015-06-16 16:08:22.314 23468 TRACE nova.compute.manager [instance:
 9667a1d6-9883-429d-8b85-6f0ad0a8d887] Traceback (most recent call last):
 2015-06-16 16:08:22.314 23468 TRACE nova.compute.manager [instance:
 9667a1d6-9883-429d-8b85-6f0ad0a8d887]   File
 /usr/lib/python2.7/site-packages/nova/compute/manager.py, line 4938, in
 _heal_instance_info_cache
 2015-06-16 16:08:22.314 23468 TRACE nova.compute.manager [instance:
 9667a1d6-9883-429d-8b85-6f0ad0a8d887]
 self._get_instance_nw_info(context, instance, use_slave=True)
 2015-06-16 16:08:22.314 23468 TRACE nova.compute.manager [instance:
 9667a1d6-9883-429d-8b85-6f0ad0a8d887]   File
 /usr/lib/python2.7/site-packages/nova/compute/manager.py, line 1163, in
 _get_instance_nw_info
 2015-06-16 16:08:22.314 23468 TRACE nova.compute.manager [instance:
 9667a1d6-9883-429d-8b85-6f0ad0a8d887] instance)
 2015-06-16 16:08:22.314 23468 TRACE nova.compute.manager [instance:
 9667a1d6-9883-429d-8b85-6f0ad0a8d887]   File
 /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py, line 482,
 in get_instance_nw_info
 2015-06-16 16:08:22.314 23468 TRACE nova.compute.manager [instance:
 9667a1d6-9883-429d-8b85-6f0ad0a8d887] port_ids)
 2015-06-16 16:08:22.314 23468 TRACE nova.compute.manager [instance:
 9667a1d6-9883-429d-8b85-6f0ad0a8d887]   File
 /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py, line 496,
 in _get_instance_nw_info
 2015-06-16 16:08:22.314 23468 TRACE nova.compute.manager [instance:
 9667a1d6-9883-429d-8b85-6f0ad0a8d887] port_ids)
 2015-06-16 16:08:22.314 23468 TRACE nova.compute.manager [instance:
 9667a1d6-9883-429d-8b85-6f0ad0a8d887]   File
 /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py, line
 1137, in _build_network_info_model
 2015-06-16 16:08:22.314 23468 TRACE nova.compute.manager [instance:
 9667a1d6-9883-429d-8b85-6f0ad0a8d887] data =
 client.list_ports(**search_opts)
 2015-06-16 16:08:22.314 23468 TRACE nova.compute.manager [instance:
 9667a1d6-9883-429d-8b85-6f0ad0a8d887]   File
 /usr/lib/python2.7/site-packages/nova/network/neutronv2/__init__.py, line
 81, in wrapper
 2015-06-16 16:08:22.314 23468 TRACE nova.compute.manager [instance:
 9667a1d6-9883-429d-8b85-6f0ad0a8d887] ret = obj(*args, **kwargs)



 Below are my settings:


 == Nova networks ==
 +--++--+
 | ID   | Label  | Cidr |
 +--++--+
 | 7d554063-8b7e-40a9-af02-738ca2b480a4 | net-ext| -|
 | f14cecdf-95de-424a-871e-f278e61a008b | Int-Subnet | -|
 

Re: [Openstack-operators] [openstack-dev] [nova] [neutron] Re: How do your end users use networking?

2015-06-17 Thread Sam Morrison

 On 17 Jun 2015, at 8:35 pm, Neil Jerram neil.jer...@metaswitch.com wrote:
 
 Hi Sam,
 
 On 17/06/15 01:31, Sam Morrison wrote:
 We at NeCTAR are starting the transition to neutron from nova-net and 
 neutron almost does what we want.
 
 We have 10 “public” networks and 10 “service” networks and depending on 
 which compute node you land on you get attached to one of them.
 
 In neutron speak we have multiple shared externally routed provider 
 networks. We don’t have any tenant networks or any other fancy stuff yet.
 How I’ve currently got this set up is by creating 10 networks and subsequent 
 subnets eg. public-1, public-2, public-3 … and service-1, service-2, 
 service-3 and so on.
 
 In nova we have made a slight change in allocate for instance [1] whereby 
 the compute node has a designated hardcoded network_ids for the public and 
 service network it is physically attached to.
 We have also made changes in the nova API so users can’t select a network 
 and the neutron endpoint is not registered in keystone.
 
 That all works fine but ideally I want a user to be able to choose if they 
 want a public and/or service network. We can’t let them as we have 10 public
 networks, we almost need something in neutron like a “network group” or
 something that allows a user to select “public” and it allocates them a port 
 in one of the underlying public networks.
 
 This begs the question: why have you defined 10 public-N networks, instead of 
 just one public network?

I think this has all been answered but just in case.
There are multiple reasons. We don’t have a single IPv4 range big enough for 
our cloud, we don’t want the broadcast domain to be massive, the compute nodes 
are in different data centres, etc.
Basically it’s not how our underlying physical network is set up, and we can’t 
change that.
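For illustration, the “network group” allocation Sam is asking for can be sketched in a few lines; the labels, member networks, and free-address counts below are made up:

```python
def pick_network(group, groups, free_ips):
    """Pick a member network of ``group`` that still has free IPs.

    ``groups`` maps a label ('public', 'service') to its underlying
    provider networks; ``free_ips`` maps network name to available
    addresses. Raises if the whole group is exhausted.
    """
    for net in groups[group]:
        if free_ips.get(net, 0) > 0:
            return net
    raise LookupError("no capacity left in network group %r" % group)

groups = {"public": ["public-1", "public-2", "public-3"],
          "service": ["service-1", "service-2"]}
free_ips = {"public-1": 0, "public-2": 37, "public-3": 512,
            "service-1": 4, "service-2": 0}

print(pick_network("public", groups, free_ips))   # -> public-2
print(pick_network("service", groups, free_ips))  # -> service-1
```

The hard part in practice is not this selection loop but making Neutron aware of which underlying network each compute node can actually reach, which is what the rest of the thread discusses.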

Sam


 
 I tried going down the route of having 1 public and 1 service network in 
 neutron then creating 10 subnets under each. That works until you get to 
 things like dhcp-agent and metadata agent although this looks like it could 
 work with a few minor changes. Basically I need a dhcp-agent to be spun up 
 per subnet and ensure they are spun up in the right place.
 
 Why the 10 subnets?  Is it to do with where you actually have real L2 
 segments, in your deployment?
 
 Thanks,
   Neil
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [nova] [neutron] Re: How do your end users use networking?

2015-06-17 Thread Kris G. Lindgren

On 6/17/15, 10:59 AM, Neil Jerram neil.jer...@metaswitch.com wrote:



On 17/06/15 16:17, Kris G. Lindgren wrote:
 See inline.
 

 Kris Lindgren
 Senior Linux Systems Engineer
 GoDaddy, LLC.



 On 6/17/15, 5:12 AM, Neil Jerram neil.jer...@metaswitch.com wrote:

 Hi Kris,

 Apologies in advance for questions that are probably really dumb - but
 there are several points here that I don't understand.

 On 17/06/15 03:44, Kris G. Lindgren wrote:
 We are doing pretty much the same thing - but in a slightly different
 way.  We extended the nova scheduler to help choose networks (i.e. don't
 put VMs on a network/host that doesn't have any available IP address).

 Why would a particular network/host not have any available IP address?

   If a created network has 1024 IPs on it (/22) and we provision 1020 VMs,
   anything deployed after that will not have an additional IP address
   because the network doesn't have any available IP addresses (you lose
   some IPs to the network itself).

OK, thanks, that certainly explains the particular network possibility.

So I guess this applies where your preference would be for network A,
but it would be OK to fall back to network B, and so on.  That sounds
like it could be a useful general enhancement.

(But, if a new VM absolutely _has_ to be on, say, the 'production'
network, and the 'production' network is already fully used, you're
fundamentally stuck, aren't you?)

Yes - this would be a scheduling failure - and I am ok with that.  It does
no good to have a vm on a network that doesn't work.


What about the /host part?  Is it possible in your system for a
network to have IP addresses available, but for them not to be usable on
a particular host?

Yes this is also a possibility.  That the network allocated to a set of
hosts has IP's available but no compute capacity to spin up vms on it.
Again - I am ok with this.


 Then,
 we add into the host-aggregate that each HV is attached to a network
 metadata item which maps to the names of the neutron networks that host
 supports.  This basically creates the mapping of which host supports what
 networks, so we can correctly filter hosts out during scheduling. We do
 allow people to choose a network if they wish and we do have the neutron
 end-point exposed. However, by default if they do not supply a boot
 command with a network, we will filter the networks down and choose one
 for them.  That way they never hit [1].  This also works well for us,
 because the default UI that we provide our end-users is not horizon.

 Why do you define multiple networks - as opposed to just one - and why
 would one of your users want to choose a particular one of those?

 (Do you mean multiple as in public-1, public-2, ...; or multiple as in
 public, service, ...?)

   This is answered in the other email and original email as well.  But
   basically we have multiple L2 segments that only exist on certain
   switches and thus are only tied to certain hosts.  With the way neutron
   is currently structured we need to create a network for each L2. So
   that's why we define multiple networks.
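The aggregate-metadata mapping Kris describes can be reduced, for illustration, to a trivial scheduler-style filter: keep only hosts whose aggregate advertises the requested network. The aggregate layout and the "networks" metadata key below are assumptions for the sketch, not the actual GoDaddy patch:

```python
def hosts_for_network(aggregates, requested_network):
    """Filter hosts by the networks their host-aggregate advertises.

    ``aggregates`` maps an aggregate name to a dict with ``hosts`` and a
    ``metadata`` dict whose ``networks`` item is a comma-separated
    string (aggregate metadata values are flat strings). This mirrors
    the idea of the scheduler change, not the real implementation.
    """
    matched = set()
    for agg in aggregates.values():
        networks = {n.strip() for n in agg["metadata"]["networks"].split(",")}
        if requested_network in networks:
            matched.update(agg["hosts"])
    return sorted(matched)

aggregates = {
    "rack-a": {"hosts": ["hv01", "hv02"],
               "metadata": {"networks": "public-1,service-1"}},
    "rack-b": {"hosts": ["hv03"],
               "metadata": {"networks": "public-2,service-2"}},
}

print(hosts_for_network(aggregates, "public-2"))  # -> ['hv03']
```

An empty result here corresponds to the "No valid host" scheduling failure mentioned elsewhere in the thread.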

Thanks!  Ok, just to check that I really understand this:

- You have real L2 segments connecting some of your compute hosts
together - and also I guess to a ToR that does L3 to the rest of the
data center.

Correct.



- You presumably then just bridge all the TAP interfaces, on each host,
to the host's outwards-facing interface.

+ VM
|
+- Host + VM
|   |
|   + VM
|
|   + VM
|   |
+- Host + VM
|   |
ToR ---+   + VM
|
|   + VM
|   |
+- Host + VM
|
+ VM

Also correct, we are using flat provider networks (shared=true) -
however provider vlan networks would work as well.


- You specify each such setup as a network in the Neutron API - and
hence you have multiple similar networks, for your data center as a whole.

Out of interest, do you do this just because it's the Right Thing
according to the current Neutron API - i.e. because a Neutron network is
L2 - or also because it's needed in order to get the Neutron
implementation components that you use to work correctly?  For example,
so that you have a DHCP agent for each L2 network (if you use the
Neutron DHCP agent).

Somewhat both.  It was a question of how to get neutron to handle this without
making drastic changes to the base-level neutron concepts.  We currently
do have dhcp-agents and nova-metadata agent running in each L2 and we
specifically assign them to hosts in that L2 space.  We are currently
working on ways to remove this requirement.
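The constraint Kris mentions, that a DHCP agent must sit in the same L2 as the network it serves, can be sanity-checked mechanically. A sketch with an illustrative data model (the mappings below are invented, not a Neutron API):

```python
def misplaced_dhcp_agents(agent_hosts, host_segment, network_segment):
    """Find DHCP agents serving a network from outside its L2 segment.

    ``agent_hosts`` maps network -> host running its dhcp-agent,
    ``host_segment`` maps host -> L2 segment, and ``network_segment``
    maps network -> the segment it exists on. Purely illustrative.
    """
    bad = []
    for net, host in sorted(agent_hosts.items()):
        if host_segment.get(host) != network_segment.get(net):
            bad.append((net, host))
    return bad

# hv01 hosts dhcp for both networks, but public-2 only exists in rack-b.
agent_hosts = {"public-1": "hv01", "public-2": "hv01"}
host_segment = {"hv01": "rack-a", "hv03": "rack-b"}
network_segment = {"public-1": "rack-a", "public-2": "rack-b"}

print(misplaced_dhcp_agents(agent_hosts, host_segment, network_segment))
# -> [('public-2', 'hv01')]
```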


   For our end users - they only care about getting a vm with a single ip
 address
   

[Openstack-operators] [Neutron] Metaplugin removal in Liberty

2015-06-17 Thread Hirofumi Ichihara
Hi Operator folks,
(I apologize for overlapping mail)

Does anyone use metaplugin of Neutron?
I want to remove it.

I have maintained metaplugin in Neutron.
The plugin enables users to use multiple plugins at the same time.
However, the ML2 plugin can now be used to do that.
I know that there is nobody using metaplugin and, besides, it’s still
experimental[1].
Therefore, I want to remove metaplugin in the Liberty cycle with a patch[2],
although under the plugin deprecation cycle it would otherwise have to be
maintained through Liberty.

Could you respond to me if you disagree with the proposal?

[1]: http://docs.openstack.org/admin-guide-cloud/content/section_limitations.html
[2]: https://review.openstack.org/#/c/192056/

thanks,
Hiro
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [Neutron] Metaplugin removal in Liberty

2015-06-17 Thread Edgar Magana
I haven’t seen anyone really using it. I vote for removing it from the tree.

Edgar

From: Hirofumi Ichihara
Date: Wednesday, June 17, 2015 at 4:37 PM
To: 
openstack-operators@lists.openstack.org
Subject: [Openstack-operators] [Neutron] Metaplugin removal in Liberty

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [nova] [neutron] Re: How do your end users use networking?

2015-06-17 Thread Neil Jerram

Hi Kris,

Apologies in advance for questions that are probably really dumb - but 
there are several points here that I don't understand.


On 17/06/15 03:44, Kris G. Lindgren wrote:

We are doing pretty much the same thing - but in a slightly different way.
  We extended the nova scheduler to help choose networks (IE. don't put
vm's on a network/host that doesn't have any available IP address).


Why would a particular network/host not have any available IP address?


Then,
we add into the host-aggregate that each HV is attached to a network
metadata item which maps to the names of the neutron networks that host
supports.  This basically creates the mapping of which host supports what
networks, so we can correctly filter hosts out during scheduling. We do
allow people to choose a network if they wish and we do have the neutron
end-point exposed. However, by default if they do not supply a boot
command with a network, we will filter the networks down and choose one
for them.  That way they never hit [1].  This also works well for us,
because the default UI that we provide our end-users is not horizon.


Why do you define multiple networks - as opposed to just one - and why 
would one of your users want to choose a particular one of those?


(Do you mean multiple as in public-1, public-2, ...; or multiple as in 
public, service, ...?)



We currently only support one network per HV via this configuration, but
we would like to be able to expose a network type or group via neutron
in the future.

I believe what you described below is also another way of phrasing the ask
that we had in [2].  That you want to define multiple top level networks
in neutron: 'public' and 'service'.  That is made up by multiple desperate


desperate? :-)  I assume you probably meant separate here.


L2 networks: 'public-1', 'public2,' ect which are independently
constrained to a specific set of hosts/switches/datacenter.


If I'm understanding correctly, this is one of those places where I get 
confused about the difference between Neutron-as-an-API and 
Neutron-as-a-software-implementation.  I guess what you mean here is 
that your deployment hardware is really providing those L2 segments 
directly, and hence you aren't using Neutron's software-based simulation 
of L2 segments.  Is that right?



We have talked about working around this under our configuration one of
two ways.  First, is to use availability zones to provide the separation
between: 'public' and 'service', or in our case: 'prod', 'pki','internal',
ect, ect.


Why are availability zones involved here?  Assuming you had 'prod', 
'pki','internal' etc. networks set up and represented as such in 
Neutron, why wouldn't you just say which of those networks each instance 
should connect to, when creating each instance?


Regards,
Neil


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Shared storage for live-migration with NFS, Ceph and Lustre

2015-06-17 Thread Miguel A Diaz Corchero

Hi friends.

I'm evaluating different distributed file systems to grow our 
infrastructure from roughly 10 nodes to 40 nodes. One of the bottlenecks 
is the shared storage installed to enable live migration.
The selected candidates are NFS, Ceph, and Lustre (which is already 
installed for HPC purposes).


Sketching out a brief plan, and leaving network connectivity aside:

*a)* with NFS and Ceph, I think it is possible, but only by dividing the 
whole infrastructure (40 nodes) into smaller clusters, for instance 10 
nodes with one storage server each. Obviously, live migration is then 
only possible between nodes in the same cluster (or zone).


*b)* with Lustre, my idea is to connect all 40 nodes to the same Lustre 
filesystem (single MDS) and use all the concurrency advantages of the 
storage.  In this case, live migration would be possible among all 
the nodes.


I would like to ask you for any ideas, comments, or experience. I think 
the most untested case is b), but has anyone tried to use Lustre in a 
similar scenario? Any comment on either case a) or b) is appreciated.
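One way to compare options a) and b) is by how many host pairs can live-migrate between each other, since migration is only possible within a shared-storage island. A small sketch under the cluster layouts described above:

```python
def migration_pairs(clusters):
    """Count live-migration-capable host pairs given storage islands.

    ``clusters`` is a list of host counts, one per shared-storage
    island; migration is only possible within an island, so each
    island of n hosts contributes n*(n-1)/2 unordered pairs.
    """
    return sum(n * (n - 1) // 2 for n in clusters)

# Option a): four islands of 10 nodes, each with its own NFS/Ceph storage.
print(migration_pairs([10, 10, 10, 10]))  # -> 180
# Option b): one Lustre filesystem shared by all 40 nodes.
print(migration_pairs([40]))              # -> 780
```

This ignores everything else that matters (metadata-server load, failure domains, network locality); it only quantifies the migration-flexibility trade-off.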


Thanks
Miguel.


--
/Miguel Angel Díaz Corchero/
/*System Administrator / Researcher*/
/c/ Sola nº 1; 10200 TRUJILLO, SPAIN/
/Tel: +34 927 65 93 17 Fax: +34 927 32 32 37/






Disclaimer: 
This message and its attached files is intended exclusively for its recipients and may contain confidential information. If you received this e-mail in error you are hereby notified that any dissemination, copy or disclosure of this communication is strictly prohibited and may be unlawful. In this case, please notify us by a reply and delete this email and its contents immediately. 



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [nova] [neutron] Re: How do your end users use networking?

2015-06-17 Thread Neil Jerram
[Sorry - unintentionally dropped -operators below; adding it back in 
this copy.]


On 17/06/15 11:35, Neil Jerram wrote:

Hi Sam,

On 17/06/15 01:31, Sam Morrison wrote:

We at NeCTAR are starting the transition to neutron from nova-net and
neutron almost does what we want.

We have 10 “public” networks and 10 “service” networks and depending
on which compute node you land on you get attached to one of them.

In neutron speak we have multiple shared externally routed provider
networks. We don’t have any tenant networks or any other fancy stuff yet.
How I’ve currently got this set up is by creating 10 networks and
subsequent subnets eg. public-1, public-2, public-3 … and service-1,
service-2, service-3 and so on.

In nova we have made a slight change in allocate for instance [1]
whereby the compute node has a designated hardcoded network_ids for
the public and service network it is physically attached to.
We have also made changes in the nova API so users can’t select a
network and the neutron endpoint is not registered in keystone.

That all works fine but ideally I want a user to be able to choose if
they want a public and/or service network. We can’t let them as we
have 10 public networks, we almost need something in neutron like a
“network group” or something that allows a user to select “public” and
it allocates them a port in one of the underlying public networks.


This begs the question: why have you defined 10 public-N networks,
instead of just one public network?


I tried going down the route of having 1 public and 1 service network
in neutron then creating 10 subnets under each. That works until you
get to things like dhcp-agent and metadata agent although this looks
like it could work with a few minor changes. Basically I need a
dhcp-agent to be spun up per subnet and ensure they are spun up in the
right place.


Why the 10 subnets?  Is it to do with where you actually have real L2
segments, in your deployment?

Thanks,
 Neil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Could not get IP to instance through neutron

2015-06-17 Thread Eren Türkay
On 17-06-2015 02:08, pra devOPS wrote:
 Stderr: 'ovs-ofctl: br-int is not a bridge or a socket\n'

I think this error is clear. br-int seems to be created in a wrong way. How did
you create this interface?

-- 
Eren Türkay, System Administrator
https://skyatlas.com/ | +90 850 885 0357

Yildiz Teknik Universitesi Davutpasa Kampusu
Teknopark Bolgesi, D2 Blok No:107
Esenler, Istanbul Pk.34220



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [nova] [neutron] Re: How do your end users use networking?

2015-06-17 Thread Kyle Mestery
On Wed, Jun 17, 2015 at 1:59 AM, Armando M. arma...@gmail.com wrote:



 On 16 June 2015 at 22:36, Sam Morrison sorri...@gmail.com wrote:


 On 17 Jun 2015, at 10:56 am, Armando M. arma...@gmail.com wrote:



 On 16 June 2015 at 17:31, Sam Morrison sorri...@gmail.com wrote:

 We at NeCTAR are starting the transition to neutron from nova-net and
 neutron almost does what we want.

 We have 10 “public” networks and 10 “service” networks and depending on
 which compute node you land on you get attached to one of them.

 In neutron speak we have multiple shared externally routed provider
 networks. We don’t have any tenant networks or any other fancy stuff yet.
 How I’ve currently got this set up is by creating 10 networks and
 subsequent subnets eg. public-1, public-2, public-3 … and service-1,
 service-2, service-3 and so on.

 In nova we have made a slight change in allocate for instance [1]
 whereby the compute node has a designated hardcoded network_ids for the
 public and service network it is physically attached to.
 We have also made changes in the nova API so users can’t select a
 network and the neutron endpoint is not registered in keystone.

 That all works fine but ideally I want a user to be able to choose if
  they want a public and/or service network. We can’t let them as we have 10
  public networks, we almost need something in neutron like a “network group”
 or something that allows a user to select “public” and it allocates them a
 port in one of the underlying public networks.

 I tried going down the route of having 1 public and 1 service network in
 neutron then creating 10 subnets under each. That works until you get to
 things like dhcp-agent and metadata agent although this looks like it could
 work with a few minor changes. Basically I need a dhcp-agent to be spun up
 per subnet and ensure they are spun up in the right place.

 I’m not sure what the correct way of doing this. What are other people
 doing in the interim until this kind of use case can be done in Neutron?


 Would something like [1] be adequate to address your use case? If not,
 I'd suggest you to file an RFE bug (more details in [2]), so that we can
 keep the discussion focused on this specific case.

 HTH
 Armando

 [1] https://blueprints.launchpad.net/neutron/+spec/rbac-networks


 That’s not applicable in this case. We don’t care about tenants in this
 case.

 [2]
 https://github.com/openstack/neutron/blob/master/doc/source/policies/blueprints.rst#neutron-request-for-feature-enhancements


 The bug Kris mentioned outlines all I want too, I think.


 I don't know what you're referring to.



Armando, I think this is the bug he's referring to:

https://bugs.launchpad.net/neutron/+bug/1458890

This is something I'd like to look at next week during the mid-cycle,
especially since Carl is there and his spec for routed networks [2] covers
a lot of these use cases.

[2] https://review.openstack.org/#/c/172244/



 Sam






 Cheers,
 Sam

 [1]
 https://github.com/NeCTAR-RC/nova/commit/1bc2396edc684f83ce471dd9dc9219c4635afb12



  On 17 Jun 2015, at 12:20 am, Jay Pipes jaypi...@gmail.com wrote:
 
  Adding -dev because of the reference to the Neutron “Get me a network”
 spec. Also adding [nova] and [neutron] subject markers.
 
  Comments inline, Kris.
 
  On 05/22/2015 09:28 PM, Kris G. Lindgren wrote:
  During the Openstack summit this week I got to talk to a number of
 other
  operators of large Openstack deployments about how they do networking.
   I was happy, surprised even, to find that a number of us are using a
  similar type of networking strategy.  That we have similar challenges
  around networking and are solving it in our own but very similar way.
   It is always nice to see that other people are doing the same things
  as you or see the same issues as you are and that you are not crazy.
  So in that vein, I wanted to reach out to the rest of the Ops
 Community
  and ask one pretty simple question.
 
  Would it be accurate to say that most of your end users want almost
  nothing to do with the network?
 
  That was my experience at AT&T, yes. The vast majority of end users
 could not care less about networking, as long as the connectivity was
 reliable, performed well, and they could connect to the Internet (and have
 others connect from the Internet to their VMs) when needed.
 
  In my experience, what the majority of them (both internal and
 external) want is to consume from Openstack a compute resource, a
 property of which is that the resource has an IP address.  They, at
 most, care about which network they are on, where a network is
 usually an arbitrary definition around a set of real networks,
 constrained to a location, to which the company has attached some
 sort of policy.  For example: I want to be in the production network
 vs. the xyz lab network, vs. the backup network, vs. the corp
 network.  I would say for GoDaddy, 99% of our use cases would be
 defined as: I want a
 

Re: [Openstack-operators] [openstack-dev] [nova] [neutron] Re: How do your end users use networking?

2015-06-17 Thread Kris G. Lindgren
While I didn't know about the Neutron mid-cycle being next week, I do happen 
to live in Fort Collins, so I could easily become available if you want to talk 
face-to-face about https://bugs.launchpad.net/neutron/+bug/1458890.


Kris Lindgren
Senior Linux Systems Engineer
GoDaddy, LLC.

From: Kyle Mestery mest...@mestery.com
Date: Wednesday, June 17, 2015 at 7:08 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-...@lists.openstack.org
Cc: openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] [openstack-dev] [nova] [neutron] Re: How do 
your end users use networking?

On Wed, Jun 17, 2015 at 1:59 AM, Armando M. arma...@gmail.com wrote:


On 16 June 2015 at 22:36, Sam Morrison sorri...@gmail.com wrote:

On 17 Jun 2015, at 10:56 am, Armando M. arma...@gmail.com wrote:



On 16 June 2015 at 17:31, Sam Morrison sorri...@gmail.com wrote:
We at NeCTAR are starting the transition to neutron from nova-net and neutron 
almost does what we want.

We have 10 “public” networks and 10 “service” networks, and depending on which 
compute node you land on you get attached to one of them.

In neutron speak we have multiple shared externally routed provider networks. 
We don’t have any tenant networks or any other fancy stuff yet.
How I’ve currently got this set up is by creating 10 networks and subsequent 
subnets eg. public-1, public-2, public-3 … and service-1, service-2, service-3 
and so on.

In nova we have made a slight change in allocate for instance [1] whereby the 
compute node has a designated hardcoded network_ids for the public and service 
network it is physically attached to.
We have also made changes in the nova API so users can’t select a network and 
the neutron endpoint is not registered in keystone.

That all works fine, but ideally I want a user to be able to choose whether 
they want a public and/or service network. We can’t let them, as we have 10 
public networks; we almost need something in neutron like a “network group” or 
something that allows a user to select “public” and allocates them a port in 
one of the underlying public networks.
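A “network group” of this sort could be sketched as below; this is a hypothetical illustration (the group names, free-IP lookup, and selection policy are all invented), not something Neutron provides today:

```python
# Hypothetical free-IP counts per underlying L2 network in each "network
# group"; a real implementation would query Neutron for availability.
NETWORK_GROUPS = {
    "public":  {"public-1": 0, "public-2": 37, "public-3": 512},
    "service": {"service-1": 4, "service-2": 0},
}

def pick_network(group):
    """Pick the network in the group with the most free IPs, or None."""
    candidates = {n: f for n, f in NETWORK_GROUPS[group].items() if f > 0}
    if not candidates:
        return None  # every network in the group is exhausted
    return max(candidates, key=candidates.get)

print(pick_network("public"))   # public-3
```

The user only ever asks for “public”; which of the ten underlying networks they actually land on is an implementation detail.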

I tried going down the route of having 1 public and 1 service network in 
neutron, then creating 10 subnets under each. That works until you get to 
things like the dhcp-agent and metadata agent, although this looks like it 
could work with a few minor changes. Basically I need a dhcp-agent to be spun 
up per subnet and to ensure they are spun up in the right place.

I’m not sure what the correct way of doing this is. What are other people doing 
in the interim until this kind of use case can be done in Neutron?

Would something like [1] be adequate to address your use case? If not, I'd 
suggest you file an RFE bug (more details in [2]), so that we can keep the 
discussion focused on this specific case.

HTH
Armando

[1] https://blueprints.launchpad.net/neutron/+spec/rbac-networks

That’s not applicable in this case. We don’t care about tenants here.

[2] 
https://github.com/openstack/neutron/blob/master/doc/source/policies/blueprints.rst#neutron-request-for-feature-enhancements

The bug Kris mentioned outlines all I want too I think.

I don't know what you're referring to.


Armando, I think this is the bug he's referring to:

https://bugs.launchpad.net/neutron/+bug/1458890

This is something I'd like to look at next week during the mid-cycle, 
especially since Carl is there and his spec for routed networks [2] covers a 
lot of these use cases.

[2] https://review.openstack.org/#/c/172244/


Sam





Cheers,
Sam

[1] 
https://github.com/NeCTAR-RC/nova/commit/1bc2396edc684f83ce471dd9dc9219c4635afb12



 On 17 Jun 2015, at 12:20 am, Jay Pipes jaypi...@gmail.com wrote:

 Adding -dev because of the reference to the Neutron “Get me a network” spec. 
 Also adding [nova] and [neutron] subject markers.

 Comments inline, Kris.

 On 05/22/2015 09:28 PM, Kris G. Lindgren wrote:
 During the Openstack summit this week I got to talk to a number of other
 operators of large Openstack deployments about how they do networking.
  I was happy, surprised even, to find that a number of us are using a
 similar type of networking strategy.  That we have similar challenges
 around networking and are solving it in our own but very similar way.
  It is always nice to see that other people are doing the same things
 as you or see the same issues as you are and that you are not crazy.
 So in that vein, I wanted to reach out to the rest of the Ops Community
 and ask one pretty simple question.

 Would it be accurate to say that most of your end users want almost
 

Re: [Openstack-operators] [openstack-dev] [nova] [neutron] Re: How do your end users use networking?

2015-06-17 Thread Kyle Mestery
Great! I'll reach out to you in unicast mode on this Kris, thanks!

On Wed, Jun 17, 2015 at 10:27 AM, Kris G. Lindgren klindg...@godaddy.com
wrote:

  While I didn't know about the Neutron mid-cycle being next week, I do
 happen to live in Fort Collins, so I could easily become available if you
 want to talk face-to-face about
 https://bugs.launchpad.net/neutron/+bug/1458890.
  

 Kris Lindgren
 Senior Linux Systems Engineer
 GoDaddy, LLC.

   From: Kyle Mestery mest...@mestery.com
 Date: Wednesday, June 17, 2015 at 7:08 AM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-...@lists.openstack.org
 Cc: openstack-operators@lists.openstack.org 
 openstack-operators@lists.openstack.org
 Subject: Re: [Openstack-operators] [openstack-dev] [nova] [neutron] Re:
 How do your end users use networking?

On Wed, Jun 17, 2015 at 1:59 AM, Armando M. arma...@gmail.com wrote:



 On 16 June 2015 at 22:36, Sam Morrison sorri...@gmail.com wrote:


  On 17 Jun 2015, at 10:56 am, Armando M. arma...@gmail.com wrote:



 On 16 June 2015 at 17:31, Sam Morrison sorri...@gmail.com wrote:

 We at NeCTAR are starting the transition to neutron from nova-net and
 neutron almost does what we want.

 We have 10 “public” networks and 10 “service” networks, and depending on
 which compute node you land on you get attached to one of them.

 In neutron speak we have multiple shared externally routed provider
 networks. We don’t have any tenant networks or any other fancy stuff yet.
 How I’ve currently got this set up is by creating 10 networks and
 subsequent subnets eg. public-1, public-2, public-3 … and service-1,
 service-2, service-3 and so on.

 In nova we have made a slight change in allocate for instance [1]
 whereby the compute node has a designated hardcoded network_ids for the
 public and service network it is physically attached to.
 We have also made changes in the nova API so users can’t select a
 network and the neutron endpoint is not registered in keystone.

 That all works fine, but ideally I want a user to be able to choose whether
 they want a public and/or service network. We can’t let them, as we have 10
 public networks; we almost need something in neutron like a “network group”
 or something that allows a user to select “public” and allocates them a
 port in one of the underlying public networks.

 I tried going down the route of having 1 public and 1 service network in
 neutron, then creating 10 subnets under each. That works until you get to
 things like the dhcp-agent and metadata agent, although this looks like it
 could work with a few minor changes. Basically I need a dhcp-agent to be
 spun up per subnet and to ensure they are spun up in the right place.

 I’m not sure what the correct way of doing this is. What are other people
 doing in the interim until this kind of use case can be done in Neutron?


  Would something like [1] be adequate to address your use case? If not,
 I'd suggest you file an RFE bug (more details in [2]), so that we can
 keep the discussion focused on this specific case.

  HTH
 Armando

  [1] https://blueprints.launchpad.net/neutron/+spec/rbac-networks


  That’s not applicable in this case. We don’t care about tenants here.

[2]
 https://github.com/openstack/neutron/blob/master/doc/source/policies/blueprints.rst#neutron-request-for-feature-enhancements


  The bug Kris mentioned outlines all I want too I think.


  I don't know what you're referring to.



  Armando, I think this is the bug he's referring to:

 https://bugs.launchpad.net/neutron/+bug/1458890

  This is something I'd like to look at next week during the mid-cycle,
 especially since Carl is there and his spec for routed networks [2] covers
 a lot of these use cases.

 [2] https://review.openstack.org/#/c/172244/



  Sam






 Cheers,
 Sam

 [1]
 https://github.com/NeCTAR-RC/nova/commit/1bc2396edc684f83ce471dd9dc9219c4635afb12



  On 17 Jun 2015, at 12:20 am, Jay Pipes jaypi...@gmail.com wrote:
 
  Adding -dev because of the reference to the Neutron “Get me a network”
 spec. Also adding [nova] and [neutron] subject markers.
 
  Comments inline, Kris.
 
  On 05/22/2015 09:28 PM, Kris G. Lindgren wrote:
  During the Openstack summit this week I got to talk to a number of
 other
  operators of large Openstack deployments about how they do
 networking.
   I was happy, surprised even, to find that a number of us are using a
  similar type of networking strategy.  That we have similar challenges
  around networking and are solving it in our own but very similar way.
   It is always nice to see that other people are doing the same things
  as you or see the same issues as you are and that you are not
 crazy.
  So in that vein, I wanted to reach out to the rest of the Ops
 Community
  and ask one pretty simple question.
 
  Would it be accurate to say that most of your end users want almost
  nothing to do with the network?
 
  That was my 

Re: [Openstack-operators] [nova] [neutron] Re: How do your end users use networking?

2015-06-17 Thread Neil Jerram



On 17/06/15 16:17, Kris G. Lindgren wrote:

See inline.


Kris Lindgren
Senior Linux Systems Engineer
GoDaddy, LLC.



On 6/17/15, 5:12 AM, Neil Jerram neil.jer...@metaswitch.com wrote:


Hi Kris,

Apologies in advance for questions that are probably really dumb - but
there are several points here that I don't understand.

On 17/06/15 03:44, Kris G. Lindgren wrote:

We are doing pretty much the same thing - but in a slightly different way.
We extended the nova scheduler to help choose networks (i.e., don't put
VMs on a network/host that doesn't have any available IP addresses).


Why would a particular network/host not have any available IP address?


  If a created network has 1024 IPs on it (a /22) and we provision 1020 VMs,
  anything deployed after that will not get an IP address, because the
  network has no addresses left (you lose some IPs to network overhead).
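The arithmetic above can be checked with Python's standard ipaddress module; the number of reserved addresses used here (four) is an illustrative assumption, since the exact overhead depends on the deployment:

```python
import ipaddress

# The /22 from the example above: 1024 addresses in total.
net = ipaddress.ip_network("10.0.0.0/22")
total = net.num_addresses

# Illustrative assumption: four addresses lost to overhead (network
# address, broadcast address, gateway port, DHCP agent port).
reserved = 4
usable = total - reserved

print(total, usable)   # 1024 1020
```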


OK, thanks, that certainly explains the particular network possibility.

So I guess this applies where your preference would be for network A, 
but it would be OK to fall back to network B, and so on.  That sounds 
like it could be a useful general enhancement.


(But, if a new VM absolutely _has_ to be on, say, the 'production' 
network, and the 'production' network is already fully used, you're 
fundamentally stuck, aren't you?)


What about the /host part?  Is it possible in your system for a 
network to have IP addresses available, but for them not to be usable on 
a particular host?



Then,
we add a metadata item to the host aggregate that each HV belongs to,
which maps to the names of the neutron networks that host supports.
This basically creates the mapping of which host supports which
networks, so we can correctly filter hosts out during scheduling. We do
allow people to choose a network if they wish, and we do have the neutron
end-point exposed. However, by default, if they do not supply a boot
command with a network, we will filter the networks down and choose one
for them.  That way they never hit [1].  This also works well for us,
because the default UI that we provide our end-users is not horizon.
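The host-aggregate-to-network mapping described here could look roughly like the following sketch; the aggregate names, metadata shape, and helper function are hypothetical, not GoDaddy's actual code:

```python
# Hypothetical host-aggregate metadata: each aggregate lists its hosts
# and the neutron networks those hosts are physically attached to.
AGGREGATES = {
    "rack-a": {"hosts": {"hv-01", "hv-02"},
               "networks": {"public-1", "service-1"}},
    "rack-b": {"hosts": {"hv-03"},
               "networks": {"public-2", "service-2"}},
}

def hosts_supporting(network_name):
    """Return the set of hypervisors attached to the given network."""
    hosts = set()
    for agg in AGGREGATES.values():
        if network_name in agg["networks"]:
            hosts |= agg["hosts"]
    return hosts

print(sorted(hosts_supporting("public-2")))   # ['hv-03']
```

In a real deployment this lookup would read nova host-aggregate metadata rather than a static table, and it would run as a scheduler filter.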


Why do you define multiple networks - as opposed to just one - and why
would one of your users want to choose a particular one of those?

(Do you mean multiple as in public-1, public-2, ...; or multiple as in
public, service, ...?)


  This is answered in the other email and the original email as well.  But
basically we have multiple L2 segments that only exist on certain switches
and thus are only tied to certain hosts.  With the way neutron is currently
structured, we need to create a network for each L2 segment.  That’s why we
define multiple networks.


Thanks!  Ok, just to check that I really understand this:

- You have real L2 segments connecting some of your compute hosts 
together - and also I guess to a ToR that does L3 to the rest of the 
data center.


- You presumably then just bridge all the TAP interfaces, on each host, 
to the host's outwards-facing interface.


             +- VM
             |
     +- Host +- VM
     |       |
     |       +- VM
     |
     |       +- VM
     |       |
ToR -+- Host +- VM
     |       |
     |       +- VM
     |
     |       +- VM
     |       |
     +- Host +- VM
             |
             +- VM

- You specify each such setup as a network in the Neutron API - and 
hence you have multiple similar networks, for your data center as a whole.


Out of interest, do you do this just because it's the Right Thing 
according to the current Neutron API - i.e. because a Neutron network is 
L2 - or also because it's needed in order to get the Neutron 
implementation components that you use to work correctly?  For example, 
so that you have a DHCP agent for each L2 network (if you use the 
Neutron DHCP agent).



  For our end users, they only care about getting a VM with a single IP
address in a network, where a “network” is really a zone like prod or dev
or test.  They stop caring after that point.  So in the scheduler filter
that we created, we do exactly that.  We filter down from all the hosts
and networks to a combination that intersects at a host that has space,
with a network that has space, and where the network that was chosen is
actually available to that host.
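That host/network intersection can be sketched as follows; all of the inventory data and names here are hypothetical, and a real filter would live inside the Nova scheduler rather than as a standalone function:

```python
# Hypothetical inventory: free IPs per network, free slots per host,
# and which networks each host is physically attached to.
NET_FREE_IPS = {"prod-1": 0, "prod-2": 12}
HOST_FREE_SLOTS = {"hv-01": 3, "hv-02": 0, "hv-03": 5}
HOST_NETWORKS = {"hv-01": {"prod-1"},
                 "hv-02": {"prod-2"},
                 "hv-03": {"prod-2"}}

def schedule(zone_networks):
    """Return (host, network) pairs where the host has capacity, the
    network has a free IP, and the network is reachable from the host."""
    return sorted(
        (host, net)
        for host, free in HOST_FREE_SLOTS.items() if free > 0
        for net in HOST_NETWORKS[host] & zone_networks
        if NET_FREE_IPS[net] > 0
    )

print(schedule({"prod-1", "prod-2"}))   # [('hv-03', 'prod-2')]
```

Here hv-01 is dropped because its only network (prod-1) is out of IPs, and hv-02 is dropped because it has no compute capacity, leaving one valid combination.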


Thanks, makes perfect sense now.

So I think there are two possible representations, overall, of what you 
are looking for.


1. A 'network group' of similar L2 networks.  When a VM is launched, 
tenant specifies the network group instead of a particular L2 network, 
and Nova/Neutron select a host and network with available compute power 
and IP addressing.  This sounds like what you've described above.


2. A new kind of network whose ports are partitioned into various L2