Re: [openstack-dev] [neutron][routed-network] Host isn't connected to any segments when creating port

2016-11-01 Thread shihanzhang
Agree with Neil.


thanks
shihanzhang




On 2016-11-01 17:13:54, "Neil Jerram" wrote:

Hi Zhi Chang,


I believe the answer is that the physical network (aka fabric) should provide 
routing between those two subnets. This routing between segments is implicit in 
the idea of a multi-segment network, and is entirely independent of routing 
between virtual _networks_ (which is done by a Neutron router object connecting 
those networks).
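
For example (a sketch only; the gateway addresses and the fabric-side configuration are assumptions about a typical deployment), each subnet would carry a gateway IP that is owned by the physical fabric rather than by a Neutron router:

neutron subnet-create --ip_version 4 --gateway 10.1.0.1 --name multi-segment1-subnet [net-id] 10.1.0.0/24 --segment_id [segment-id]
neutron subnet-create --ip_version 4 --gateway 10.1.1.1 --name multi-segment2-subnet [net-id] 10.1.1.0/24 --segment_id [segment-id]
# 10.1.0.1 and 10.1.1.1 live on the ToR/fabric router, which then
# routes 10.1.0.0/24 <-> 10.1.1.0/24 entirely outside of Neutron.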


Hope that helps! 
Neil 
 


From: zhi
Sent: Tuesday, 1 November 2016 07:50
To: OpenStack Development Mailing List (not for usage questions)
Reply To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][routed-network] Host isn't connected to any segments when creating port


Hi, shihanzhang.


I still have a question about routed networks. I have two subnets: one is 10.1.0.0/24 and the other is 10.1.1.0/24. I create two instances, one on each host, such as 10.1.0.10 and 10.1.1.10.

My question is: how does 10.1.0.10 connect to 10.1.1.10? There is no gateway (10.1.0.1 and 10.1.1.1) in either subnet.




Hope for your reply. ;-)




Thanks
Zhi Chang


2016-11-01 14:31 GMT+08:00 zhi :

Hi, shihanzhang.


Thanks for your advice. Now I can create ports successfully by following it.






Thanks
Zhi Chang




2016-11-01 12:50 GMT+08:00 shihanzhang :

Hi Zhi Chang,
Maybe you should add a config option in the [ovs] section, bridge_mappings = public:br-ex,physnet1:br-physnet1, to handle the provider network 'physnet1'.
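
For instance (a sketch; the bridge names are assumptions that must match your environment), the [ovs] section of the agent's ml2_conf.ini would become:

[ovs]
bridge_mappings = public:br-ex,physnet1:br-physnet1
# physnet2 would need a similar mapping on whichever host serves it (assumption)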


Thanks,
shihanzhang

At 2016-11-01 11:56:33, "zhi"  wrote:

Hi shihanzhang.


Below is the configuration in ml2_conf.ini. Please review it. :)


stack@devstack:~/neutron/neutron$ cat /etc/neutron/plugins/ml2/ml2_conf.ini 
|grep -v "#"|grep -v ^$
[DEFAULT]
[ml2]
tenant_network_types = vxlan
extension_drivers = port_security
mechanism_drivers = openvswitch,linuxbridge
tenant_network_types = vxlan
[ml2_type_flat]
flat_networks = public,public,
[ml2_type_geneve]
vni_ranges = 1:1000
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ml2_type_vlan]
network_vlan_ranges = physnet1,physnet2
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
firewall_driver = 
neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[agent]
tunnel_types = vxlan
root_helper_daemon = sudo /usr/local/bin/neutron-rootwrap-daemon 
/etc/neutron/rootwrap.conf
root_helper = sudo /usr/local/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
[ovs]
datapath_type = system
bridge_mappings = public:br-ex
tunnel_bridge = br-tun
local_ip = 192.168.10.20






Thanks
Zhi Chang


2016-11-01 9:15 GMT+08:00 shihanzhang :

Hi Zhi Chang,

Could you provide your ml2_conf.ini for the ovs agent? I guess the reason is that the ovs-agent on host devstack can't handle the related segment ID.


Thanks,
shihanzhang


On 2016-10-31 18:43:36, "zhi" wrote:

Hi, all.


Recently, I watched the OpenStack Summit video named 'Scaling Up OpenStack Networking with Routed Networks'. Carl and Miguel presented this topic, and I learned a lot from it. Thanks.


But I have some questions about the demo in the talk.


I followed the steps from the talk.


First, creating two networks like this:


neutron net-create multinet --shared --segments type=dict list=true 
provider:physical_network=physnet1,provider:segmentation_id=2016,provider:network_type=vlan
 
provider:physical_network=physnet2,provider:segmentation_id=2016,provider:network_type=vlan




Second, after creating this network I get two segments, which I list using "openstack network segment list".


Third, I create two subnets on these segments using these commands:


neutron subnet-create --ip_version 4 --name multi-segment1-subnet [net-id] 10.1.0.0/24 --segment_id [segment-id]
neutron subnet-create --ip_version 4 --name multi-segment2-subnet [net-id] 10.1.1.0/24 --segment_id [segment-id]


At last, I want to create a port with a host_id. My local environment contains two compute nodes, one named "devstack" and the other "devstack2". So I use this command: "neutron port-create --binding:host_id=devstack [net-id]".


An exception happens in the neutron server. The exception says "Host devstack is not connected to any segments on routed provider network [net-id]. It should be connected to one." I cannot get the exact point of this exception.


Why does the "routed network" have relationship with host? 


How they work together between "host info (compute node ?)" and "routed 
network"?


What should I do if I want to get rid of this exception?




Hope for your reply. Especially Carl and Miguel. ;-)






Many Thanks
Zhi Chang




 



Re: [openstack-dev] [neutron][routed-network] Host isn't connected to any segments when creating port

2016-11-01 Thread shihanzhang
Hi Zhi Chang,
You also need to connect these two subnets to a router.


Thanks,
shihanzhang

On 2016-11-01 15:47:57, "zhi" wrote:

Hi, shihanzhang.


I still have a question about routed networks. I have two subnets: one is 10.1.0.0/24 and the other is 10.1.1.0/24. I create two instances, one on each host, such as 10.1.0.10 and 10.1.1.10.

My question is: how does 10.1.0.10 connect to 10.1.1.10? There is no gateway (10.1.0.1 and 10.1.1.1) in either subnet.




Hope for your reply. ;-)




Thanks
Zhi Chang




Re: [openstack-dev] [neutron][routed-network] Host isn't connected to any segments when creating port

2016-10-31 Thread shihanzhang
Hi Zhi Chang,
Maybe you should add a config option in the [ovs] section, bridge_mappings = public:br-ex,physnet1:br-physnet1, to handle the provider network 'physnet1'.


Thanks,
shihanzhang



Re: [openstack-dev] [neutron][routed-network] Host isn't connected to any segments when creating port

2016-10-31 Thread shihanzhang
Hi Zhi Chang,

Could you provide your ml2_conf.ini for the ovs agent? I guess the reason is that the ovs-agent on host devstack can't handle the related segment ID.


Thanks,
shihanzhang


在 2016-10-31 18:43:36,"zhi"  写道:

Hi, all.


Recently, I watch the OpenStack Summit video named ' Scaling Up OpenStack 
Networking with Routed Networks '.  Carl and Miguel made this topic.  I learned 
a lot of your topic. Thanks. 


But I have some questions about the demo in the topic. 


I do some steps according to the topic. 


First, creating two networks like this:


neutron net-create multinet --shared --segments type=dict list=true 
provider:physical_network=physnet1,provider:segmentation_id=2016,provider:network_type=vlan
 
provider:physical_network=physnet2,provider:segmentation_id=2016,provider:network_type=vlan




Second, I get two segments after creating this network. I get these segments by 
using " openstack network segment list ". 


Third, I create two subnets by these segments by using this command " 


neutron subnet-create --ip_version 4 --name multi-segment1-subnet [net-id] 
10.1.0.0/24 --segment_id [segment-id]
neutron subnet-create --ip_version 4 --name multi-segment2-subnet [net-id] 
10.1.1.0/24 --segment_id [segment-id]
 "


At last, I want to create a port with host_id. My local environment contains 
two compute nodes, one is named "devstack"  and the other is "devstack2". So I 
use this command "  neutron port-create --binding:host_id=devstack [net-id] ". 


Exception happens in neutron server. The exception says "


Host devstack is not connected to any segments on routed provider network 
[net-id].  It should be connected to one." I can not get this exact point about 
this exception. 


Why does the "routed network" have relationship with host? 


How they work together between "host info (compute node ?)" and "routed 
network"?


What should I do if I want to get rid of this exception?




Hope for your reply. Especially Carl and Miguel. ;-)






Many Thanks
Zhi Chang__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Re: [neutron] dragonflow deployment incorrectness

2016-07-27 Thread shihanzhang
As far as I know, Dragonflow still uses the neutron l3-agent for SNAT, so the l3-agent is enabled and the router namespace gets created.




On 2016-07-28 08:53:20, kangjingt...@sina.com wrote:

Hi 


The reason a namespace is created when creating a router is simply that the l3-agent is enabled. You can disable it if you want to.


At present, only the dhcp app in Dragonflow needs packets from the datapath, so you will observe controller-related flows after you create VMs in a subnet where dhcp is enabled.


BTW, if you want all flows in br-int to be shown, including hidden flows, you can use the command "ovs-appctl bridge/dump-flows br-int".


- Original Message -
From: "Zheng Jie"
To: "openstack-dev"
Subject: [openstack-dev] [neutron] dragonflow deployment incorrectness
Date: 2016-07-27 14:46




Hi, everybody:

When I deploy Dragonflow, I start neutron-server, df-local-controller and df-l3-router, but router namespaces are still created when I create routers. I checked the ovs flow table and there is no CONTROLLER action for IP traffic, so I guess something is wrong with my conf.
Below are my neutron.conf and l3_agent.ini respectively.

neutron.conf-
[DEFAULT]
dhcp_agent_notification = False
advertise_mtu = True
api_workers = 10
notify_nova_on_port_data_changes = True
notify_nova_on_port_status_changes = True
auth_strategy = keystone
allow_overlapping_ips = True
debug = True
service_plugins =
core_plugin = dragonflow.neutron.plugin.DFPlugin
transport_url = rabbit://stackrabbit:secret@172.16.18.127:5672/
logging_exception_prefix = %(color)s%(asctime)s.%(msecs)03d TRACE %(name)s 
^[[01;35m%(instance)s^[[00m
logging_debug_format_suffix = ^[[00;33mfrom (pid=%(process)d) %(funcName)s 
%(pathname)s:%(lineno)d^[[00m
logging_default_format_string = %(asctime)s.%(msecs)03d %(color)s%(levelname)s 
%(name)s [^[[00;36m-%(color)s] ^[[01;35m%(instance)s%(color)s%(message)s^[[00m
logging_context_format_string = %(asctime)s.%(msecs)03d %(color)s%(levelname)s 
%(name)s [^[[01;36m%(request_id)s ^[[00;36m%(user_name)s 
%(project_id)s%(color)s] ^[[01;35m%(instance)s%(color)s%(message)s^[[00m
bind_host = 0.0.0.0
use_syslog = False
state_path = /opt/stack/data/neutron

[df]
pub_sub_driver = redis_db_pubsub_driver
enable_selective_topology_distribution = True
publisher_rate_limit_count = 1
publisher_rate_limit_timeout = 180
pub_sub_use_multiproc = False
enable_df_pub_sub = True
monitor_table_poll_time = 30
apps_list = 
l2_app.L2App,l3_proactive_app.L3ProactiveApp,dhcp_app.DHCPApp,dnat_app.DNATApp,sg_app.SGApp,portsec_app.PortSecApp
integration_bridge = br-int
tunnel_type = geneve
local_ip = 172.16.18.127
nb_db_class = dragonflow.db.drivers.redis_db_driver.RedisDbDriver
remote_db_hosts = 172.16.18.127:4001
remote_db_port = 4001
remote_db_ip = 172.16.18.127

[df_l2_app]
l2_responder = True

[df_dnat_app]
ex_peer_patch_port = patch-int
int_peer_patch_port = patch-ex
external_network_bridge = br-ex

-l3_agent.ini 
-
[DEFAULT]
l3_agent_manager = neutron.agent.l3_agent.L3NATAgentWithStateReport
external_network_bridge = br-ex
interface_driver = openvswitch
ovs_use_veth = False
debug = True

[AGENT]
root_helper_daemon = sudo /usr/bin/neutron-rootwrap-daemon 
/etc/neutron/rootwrap.conf
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

--ovs-ofctl dump-flows br-int---

NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=1710.883s, table=0, n_packets=0, n_bytes=0, 
idle_age=1710, priority=100,in_port=19 
actions=load:0x17->NXM_NX_REG6[],load:0x3->OXM_OF_METADATA[],resubmit(,1)
 cookie=0x0, duration=1710.883s, table=0, n_packets=1268, n_bytes=53760, 
idle_age=0, priority=100,in_port=13 
actions=load:0x15->NXM_NX_REG6[],load:0x3->OXM_OF_METADATA[],resubmit(,1)
 cookie=0x0, duration=1710.876s, table=0, n_packets=0, n_bytes=0, 
idle_age=1710, priority=100,in_port=15 
actions=load:0x1a->NXM_NX_REG6[],load:0x2->OXM_OF_METADATA[],resubmit(,1)
 cookie=0x0, duration=1710.876s, table=0, n_packets=0, n_bytes=0, 
idle_age=1710, priority=100,in_port=20 
actions=load:0x1c->NXM_NX_REG6[],load:0x3->OXM_OF_METADATA[],resubmit(,1)
 cookie=0x0, duration=1710.876s, table=0, n_packets=3, n_bytes=126, 
idle_age=643, priority=100,in_port=18 
actions=load:0x1b->NXM_NX_REG6[],load:0x2->OXM_OF_METADATA[],resubmit(,1)
 cookie=0x0, duration=1710.883s, table=0, n_packets=0, n_bytes=0, 
idle_age=1710, priority=100,tun_id=0x17 
actions=load:0x17->NXM_NX_REG7[],load:0x3->OXM_OF_METADATA[],resubmit(,72)
 cookie=0x0, duration=1710.882s, table=0, n_packets=0, n_bytes=0, 
idle_age=1710, priority=100,tun_id=0x15 
actions=load:0x15->NXM_NX_REG7[],load:0x3->OXM_OF_METADATA[],resubmit(,72)
 cookie=0x0, duration=1710.876s, table=0, n_packets=0, n_bytes=0, 
idle_age=1710, priority=100,tun_id=0x1a 
actions=load:0x1a->NXM_NX_REG7[],load:0x2->OXM_OF_METADATA[],resubmit(,72)
 cookie=0x0, duration=1710.876s, table=0, n_packets=0, n_bytes=0,

Re: [openstack-dev] [neutron][SFC]

2016-06-07 Thread shihanzhang
Hi Alioune and Cathy,
 For devstack on Ubuntu 14.04, the default OVS version is 2.0.2, so the error Alioune described occurred.
 Do we need to install a special OVS version in the networking-sfc devstack plugin.sh?
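
In the meantime, a workaround sketch that just scripts the fix Alioune applied by hand (bridge names taken from his report below):

for br in br-int br-ex br-tun br-mgmt0; do
    sudo ovs-vsctl set bridge $br protocols=OpenFlow10,OpenFlow11,OpenFlow12,OpenFlow13
done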






On 2016-06-07 07:48:26, "Cathy Zhang" wrote:


Hi Alioune,

 

Which OVS version are you using?

Try openvswitch version 2.4.0 and restart the openvswitch server before installing devstack.

 

Cathy

 

From: Alioune [mailto:baliou...@gmail.com]
Sent: Friday, June 03, 2016 9:07 AM
To: openstack-dev@lists.openstack.org
Cc: Cathy Zhang
Subject: [openstack-dev][neutron][SFC]

 

Problem with OpenStack SFC

Hi all, 

I've installed OpenStack SFC with devstack and all modules are correctly running except the neutron L2 agent.

 

After a "screen -rd", it seems that there is a conflict between l2-agent and 
SFC (see trace bellow).

I solved the issue with "sudo ovs-vsctl set bridge br 
protocols=OpenFlow10,OpenFlow11,OpenFlow12,OpenFlow13" on all openvswitch 
bridge (br-int, br-ex, br-tun and br-mgmt0).

I would like to know:

  - if someone knows why this error arises?
  - is there another way to solve it?

 

Regards,

 

2016-06-03 12:51:56.323 WARNING 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-1258bbbc-7211-4cfd-ab7c-8b856604f768 None None] OVS is dead. 
OVSNeutronAgent will keep running and checking OVS status periodically.

2016-06-03 12:51:56.330 DEBUG 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-1258bbbc-7211-4cfd-ab7c-8b856604f768 None None] Agent rpc_loop - 
iteration:4722 completed. Processed ports statistics: {'regular': {'updated': 
0, 'added': 0, 'removed': 0}}. Elapsed:0.086 from (pid=12775) 
loop_count_and_wait 
/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1680

2016-06-03 12:51:58.256 DEBUG 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-1258bbbc-7211-4cfd-ab7c-8b856604f768 None None] Agent rpc_loop - 
iteration:4723 started from (pid=12775) rpc_loop 
/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1732

2016-06-03 12:51:58.258 DEBUG neutron.agent.linux.utils 
[req-1258bbbc-7211-4cfd-ab7c-8b856604f768 None None] Running command (rootwrap 
daemon): ['ovs-ofctl', '-O openflow13', 'dump-flows', 'br-int', 'table=23'] 
from (pid=12775) execute_rootwrap_daemon 
/opt/stack/neutron/neutron/agent/linux/utils.py:101

2016-06-03 12:51:58.311 ERROR neutron.agent.linux.utils 
[req-1258bbbc-7211-4cfd-ab7c-8b856604f768 None None]

Command: ['ovs-ofctl', '-O openflow13', 'dump-flows', 'br-int', 'table=23']

Exit code: 1

Stdin:

Stdout:

Stderr: 
2016-06-03T12:51:58Z|1|vconn|WARN|unix:/var/run/openvswitch/br-int.mgmt: 
version negotiation failed (we support version 0x04, peer supports version 0x01)

ovs-ofctl: br-int: failed to connect to socket (Broken pipe)

 

2016-06-03 12:51:58.323 ERROR networking_sfc.services.sfc.common.ovs_ext_lib 
[req-1258bbbc-7211-4cfd-ab7c-8b856604f768 None None]

Command: ['ovs-ofctl', '-O openflow13', 'dump-flows', 'br-int', 'table=23']

Exit code: 1

Stdin:

Stdout:

Stderr: 
2016-06-03T12:51:58Z|1|vconn|WARN|unix:/var/run/openvswitch/br-int.mgmt: 
version negotiation failed (we support version 0x04, peer supports version 0x01)

ovs-ofctl: br-int: failed to connect to socket (Broken pipe)

 

2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib 
Traceback (most recent call last):

2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib   
File 
"/opt/stack/networking-sfc/networking_sfc/services/sfc/common/ovs_ext_lib.py", 
line 125, in run_ofctl

2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib
 process_input=process_input)

2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib   
File "/opt/stack/neutron/neutron/agent/linux/utils.py", line 159, in execute

2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib
 raise RuntimeError(m)

2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib 
RuntimeError:

2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib 
Command: ['ovs-ofctl', '-O openflow13', 'dump-flows', 'br-int', 'table=23']

2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib 
Exit code: 1

2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib 
Stdin:

2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib 
Stdout:

2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib 
Stderr: 
2016-06-03T12:51:58Z|1|vconn|WARN|unix:/var/run/openvswitch/br-int.mgmt: 
version negotiation failed (we support version 0x04, peer supports version 0x01)

2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib 
ovs-ofctl: br-int: failed to connect to socket (Broken pipe)

2016-06-03 12:51:58.323 TRACE networking_sfc.

Re: [openstack-dev] [Neutron] [Dragonflow] Support configuration of DB clusters

2015-12-27 Thread shihanzhang

good suggestion!
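
For example (a sketch only; the plural option name is an assumption modelled on the existing [df] settings, and the addresses are placeholders), a three-node cluster could then be expressed as:

[df]
remote_db_hosts = 172.16.18.127:4001,172.16.18.128:4001,172.16.18.129:4001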



At 2015-12-25 19:07:10, "Li Ma"  wrote:
>Hi all, currently, we only support db_ip and db_port in the
>configuration file. Some DB SDK supports clustering, like Zookeeper.
>You can specify a list of nodes when client application starts to
>connect to servers.
>
>I'd like to implement this feature, specifying ['ip1:port',
>'ip2:port', 'ip3:port'] list in the configuration file. If only one
>server exists, just set it to ['ip1:port'].
>
>Any suggestions?
>
>-- 
>
>Li Ma (Nick)
>Email: skywalker.n...@gmail.com
>


Re: [openstack-dev] KILO: neutron port-update --allowed-address-pairs action=clear throws an exception

2015-09-27 Thread shihanzhang
I don't see any exception using the below command:


root@szxbz:/opt/stack/neutron# neutron port-update 
3748649e-243d-4408-a5f1-8122f1fbf501 --allowed-address-pairs action=clear
Allowed address pairs must be a list.
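
A possible API-level workaround (untested, and the endpoint/token details are assumptions) is to PUT an explicit empty list instead of relying on action=clear, since the traceback shows the validator choking on None rather than on []:

curl -s -X PUT http://controller:9696/v2.0/ports/3748649e-243d-4408-a5f1-8122f1fbf501 \
  -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
  -d '{"port": {"allowed_address_pairs": []}}'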




At 2015-09-28 14:36:44, "masoom alam"  wrote:

stable KILO


Shall I check out the latest code, is that what you are saying? Also, can you please confirm that you have tested this at your end and there was no problem...




Thanks


On Sun, Sep 27, 2015 at 11:29 PM, shihanzhang  wrote:

Which branch do you use? This problem does not exist in the master branch.






At 2015-09-28 13:43:05, "masoom alam"  wrote:

Can anybody highlight why the following command is throwing an exception:


Command# neutron port-update db3113df-14a3-4d6d-a3c5-d0517a134fc3 
--allowed-address-pairs action=clear


Error:  2015-09-27 21:44:32.144 ERROR neutron.api.v2.resource 
[req-b1cbe1f2-ba21-4337-a714-f337c54ee9fc admin None] update failed
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 515, in update
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource 
allow_bulk=self._allow_bulk)
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 652, in prepare_request_body
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource 
attr_vals['validate'][rule])
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/extensions/allowedaddresspairs.py", line 51, in 
_validate_allowed_address_pairs
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource if len(address_pairs) 
> cfg.CONF.max_allowed_address_pair:
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource TypeError: object of type 
'NoneType' has no len()
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource






There is a similar bug filed on Launchpad for Havana, https://bugs.launchpad.net/juniperopenstack/+bug/1351979 . However there is no fix, and the workaround using curl mentioned on the bug is also not working for KILO... it was working for Havana and Icehouse... any pointers...?


Thanks












Re: [openstack-dev] KILO: neutron port-update --allowed-address-pairs action=clear throws an exception

2015-09-27 Thread shihanzhang
Which branch do you use? This problem does not exist in the master branch.







Re: [openstack-dev] [Neutron] Port Forwarding API

2015-09-20 Thread shihanzhang


 2) The same FIP address can be used for different mappings, for example FIP with IP X
    can be used with different ports to map to different VMs: X:4001 -> VM1 IP,
    X:4002 -> VM2 IP (this is the essence of port forwarding).
    So we also need the port mapping configuration fields.


For the second use case, I have a question: does it support DVR? If VM1 and VM2 are on different compute nodes, how does it work?
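
(Conceptually, each mapping above boils down to a DNAT rule in a router namespace; a sketch with placeholder names, just to frame the DVR question:)

sudo ip netns exec qrouter-[router-id] \
  iptables -t nat -A PREROUTING -d [FIP-X] -p tcp --dport 4001 \
  -j DNAT --to-destination [VM1-IP]:4001

With DVR, the open point is which node's namespace would hold such a rule when VM1 and VM2 are on different compute nodes.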





On 2015-09-20 14:26:23, "Gal Sagie" wrote:

Hello All,


I have sent a spec [1] to resume the work on port forwarding API and reference 
implementation.


Its currently marked as "WIP", however i raised some "TBD" questions for the 
community.

The way i see port forwarding is an API that is very similar to floating IP API 
and implementation

with few changes:


1) Port forwarding can only be defined on the router external gateway IP (or additional public IPs that are located on the router), similar to the case of centralized DNAT.


2) The same FIP address can be used for different mappings, for example FIP with IP X
can be used with different ports to map to different VMs: X:4001 -> VM1 IP,
X:4002 -> VM2 IP (this is the essence of port forwarding).

So we also need the port mapping configuration fields.


All the rest should probably behave (in my opinion) very similarly to FIPs (for example, not being able to remove the external gateway if port forwarding entries are configured; if the VM is deleted, the port forwarding entry is deleted as well; and so on...).

All of these points are mentioned in the spec and I am waiting for the community's feedback on them.


I am trying to figure out whether, implementation-wise, it would be smart to try to use the floating IP implementation and extend it for this (given that the whole mechanism described above already works for floating IPs), or to add a new implementation which behaves very similarly to floating IPs in most aspects (but still differs in some), or something else...


Would love to hear the community's feedback on the spec, even though it's WIP.


Thanks
Gal.

[1] https://review.openstack.org/#/c/224727/


Re: [openstack-dev] [neutron][L3][QA] DVR job failure rate and maintainability

2015-09-15 Thread shihanzhang
Sean,
Thank you very much for writing this. DVR indeed needs to get more attention; it's a very cool and useful feature, especially at large scale. It first landed in Neutron in Juno, and through the development of Kilo and Liberty it has been getting better and better. We have used it in our production, and in the process we found that the following bugs have not been fixed; we have filed bugs on Launchpad:
1. Every time we create a VM, it triggers router scheduling. At large scale, if there are many l3 agents bound to a DVR router, scheduling the router consumes much time, but the scheduling action is not necessary. [1]
2. Every time we bind a VM to a floating IP, it also triggers router scheduling and sends this floating IP to all bound l3 agents. [2]
3. After bulk-deleting VMs from a compute node which then has no VM on a router, the router namespace will, for the most part, remain. [3]
4. Updating router_gateway triggers reschedule_router, during which communication related to this router is broken. For a DVR router, why does the router need reschedule_router at all, and to which l3 agents does it reschedule? [4]
5. Stale fip namespaces are not cleaned up on compute nodes. [5]


I very much agree that we need a group of contributors who can help with the DVR feature in the immediate term to fix the current bugs. I would be very glad to join this group.


Neutroners, let's start doing great things!


Thanks,
Hanzhang,Shi


[1] https://bugs.launchpad.net/neutron/+bug/1486795
[2] https://bugs.launchpad.net/neutron/+bug/1486828
[3] https://bugs.launchpad.net/neutron/+bug/1496201
[4] https://bugs.launchpad.net/neutron/+bug/1496204
[5] https://bugs.launchpad.net/neutron/+bug/1470909







At 2015-09-15 06:01:03, "Sean M. Collins"  wrote:
>[adding neutron tag to subject and resending]
>
>Hi,
>
>Carl Baldwin, Doug Wiegley, Matt Kassawara, Ryan Moats, and myself are
>at the QA sprint in Fort Collins. Earlier today there was a discussion
>about the failure rate about the DVR job, and the possible impact that
>it is having on the gate.
>
>Ryan has a good patch up that shows the failure rates over time:
>
>https://review.openstack.org/223201
>
>To view the graphs, you go over into your neutron git repo, and open the
>.html files that are present in doc/dashboards - which should open up
>your browser and display the Graphite query.
>
>Doug put up a patch to change the DVR job to be non-voting while we
>determine the cause of the recent spikes:
>
>https://review.openstack.org/223173
>
>There was a good discussion after pushing the patch, revolving around
>the need for Neutron to have DVR, to fit operational and reliability
>requirements, and help transition away from Nova-Network by providing
>one of many solutions similar to Nova's multihost feature.  I'm skipping
>over a huge amount of context about the Nova-Network and Neutron work,
>since that is a big and ongoing effort. 
>
>DVR is an important feature to have, and we need to ensure that the job
>that tests DVR has a high pass rate.
>
>One thing that I think we need, is to form a group of contributors that
>can help with the DVR feature in the immediate term to fix the current
>bugs, and longer term maintain the feature. It's a big task and I don't
>believe that a single person or company can or should do it by themselves.
>
>The L3 group is a good place to start, but I think that even within the
>L3 team we need dedicated and diverse group of people who are interested
>in maintaining the DVR feature. 
>
>Without this, I think the DVR feature will start to bit-rot and that
>will have a significant impact on our ability to recommend Neutron as a
>replacement for Nova-Network in the future.
>
>-- 
>Sean M. Collins
>


Re: [openstack-dev] [Neutron] netaddr and abbreviated CIDR format

2015-08-22 Thread shihanzhang
there was another patch [1] that fixes the invalid CIDR for subnets.

thanks,
   hanzhang, shi


[1] https://review.openstack.org/#/c/201942/
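
(For reference, netaddr's expansion of the abbreviated forms can be checked from the shell; cidr_abbrev_to_verbose is the netaddr helper for this, behaviour shown as I understand it:)

$ python -c "import netaddr; print(netaddr.cidr_abbrev_to_verbose('192.168/16'))"
192.168.0.0/16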


At 2015-08-22 03:33:26, "Jay Pipes"  wrote:
>On 08/21/2015 02:34 PM, Sean M. Collins wrote:
>> So - the tl;dr is that I don't think that we should accept inputs like
>> the following:
>>
>> x   -> 192
>> x/y -> 10/8
>> x.x/y   -> 192.168/16
>> x.x.x/y -> 192.168.0/24
>>
>> which are equivalent to::
>>
>> x.0.0.0/y   -> 192.0.0.0/24
>> x.0.0.0/y   -> 10.0.0.0/8
>> x.x.0.0/y   -> 192.168.0.0/16
>> x.x.x.0/y   -> 192.168.0.0/24
>
>Agreed completely.
>
>Best,
>-jay
>


Re: [openstack-dev] [neutron] Expected cli behavior when ovs-agent is down

2015-08-22 Thread shihanzhang
Hi Vikas Choudhary, when the ovs-agent service recovers (the ovs-agent process restarts), will the dhcp port not be re-bound successfully?





At 2015-08-22 14:26:08, "Vikas Choudhary"  wrote:

Hi Everybody,


I want to discuss https://bugs.launchpad.net/neutron/+bug/1348589 . This has been there for more than a year and I could not find any discussion on it.


Scenario:
ovs-agent is down, and then a network and a subnet under this newly created network are created using the cli. No error is visible to the user, but the following irregularities are found.


Discrepancies:
1. "neutron port-show" shows:
   binding:vif_type | binding_failed
2. Running "ovs-ofctl dump-flows br-tun", no OpenFlow flow got added.
3. Running "ovs-vsctl show br-int", there is no tag for the dhcp port.


The neutron db will have all the attributes required to retry vif binding. My query is when we should trigger this rebinding. Two approaches I could think of are (a rough sketch of the detection step follows this list):
1> At neutron server restart, plugins/ml2/drivers/mech_agent.bind_port can be invoked as a sync-up activity for all ports whose vif_type is "binding_failed".

2> The neutron port update api, http://developer.openstack.org/api-ref-networking-v2-ext.html , could be enhanced to also receive vif-binding-related options, and then eventually plugins/ml2/drivers/mech_agent.bind_port can be invoked. Corresponding changes will be made to the 'port update cli' as well.
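
A rough sketch of the detection step (admin credentials assumed, and the rebind trigger via binding:host_id is a hypothesis, not verified behaviour):

# find ports whose binding failed
neutron port-list -c id -c binding:vif_type | grep binding_failed
# possible retry: updating the binding host makes ml2 attempt bind_port again
neutron port-update [port-id] --binding:host_id=[host]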


Please suggest.


Re: [openstack-dev] [Neutron] VPNaaS and DVR compatibility

2015-08-06 Thread shihanzhang
I have the same question; I have filed a bug on Launchpad: https://bugs.launchpad.net/neutron/+bug/1476469 .
Who can help to clarify it?
Thanks,
Hanzhang, Shi






At 2015-08-05 00:33:05, "Sergey Kolekonov"  wrote:

Hi,


I'd like to clarify a situation around VPNaaS and DVR compatibility in Neutron.
In non-DVR case VMs use a network node to access each other and external 
network.
So with VPNaaS enabled we just have additional setup steps performed on network 
nodes to establish VPN connection between VMs.
With DVR enabled two VMs from different networks (or even clouds) should still 
reach each other through network nodes, but if floating IPs are assigned, this 
doesn't work.
So my question is: is it expected and if yes are there any plans to add full 
support for VPNaaS on DVR-enabled clusters?


Thank you.
--

Regards,
Sergey Kolekonov


Re: [openstack-dev] [Neutron] Subnet's dns nameserver doesn't order by input

2015-07-06 Thread shihanzhang
Hi, Zhi Chang,
   this link, https://bugs.launchpad.net/neutron/+bug/1218629 , is OK.




At 2015-07-06 17:13:12, "Zhi Chang"  wrote:

Thanks for your reply. Could you send the html link again? This one maybe doesn't exist.


Thx
Zhi Chang
 
 
-- Original --
From: "Oleg Bondarev"
Date: Mon, Jul 6, 2015 04:50 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [Neutron] Subnet's dns nameserver doesn't order by input
 
Hi,


Currently there is no dns servers prioritization for subnets in Neutron.

There is an opened bug for this: https://bugs.launchpad.net/neutron/+bug/1218629


Thanks,
Oleg


On Mon, Jul 6, 2015 at 11:21 AM, Zhi Chang  wrote:

Hi, all
Subnet's nameserver is out of order. That is to say, cmd "neutron 
subnet-update [subnet-id] dns_nameservers list=true 1.2.3.4 5.6.7.8" and 
"neutron subnet-update [subnet-id] dns_nameservers list=true 5.6.7.8 
1.2.3.4" is same. I think that we often have two or more dns nameservers, one 
is a main nameserver, another is a backup. Therefore, I think we should make 
difference of the above command. 
Does anyone have ideas?


Thx
Zhi Chang





Re: [openstack-dev] [Neutron] Issue with neutron-dhcp-agent not recovering known ports cache after restart

2015-06-30 Thread shihanzhang
Hi Shraddha Pandhe,
  I think your analysis is right; I also encountered the same problem. I have filed a bug [1] and committed a patch [2] for it.


thanks,
  hanzhang shi


[1] https://launchpad.net/bugs/1469615
[2] https://review.openstack.org/#/c/196927/



On 2015-07-01 08:25:48, "Shraddha Pandhe" wrote:

Hi folks..


I have a question about the neutron dhcp agent restart scenario. It seems that, when the agent restarts, it recovers the known network IDs in its cache, but we don't recover the known ports [1].


So if a port that was present before the agent restarted is deleted after the restart, the agent won't have it in its cache. So the port here [2] will be None, and the port will actually never get deleted.


The same problem will happen if a port is updated. Has anyone seen these issues? Am I missing something?


[1] 
https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L82-L87
[2] 
https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L349



Re: [openstack-dev] [neutron]Anyone looking at support for VLAN-aware VMs in Liberty?

2015-05-11 Thread shihanzhang
Hi Eric,


Huawei is also interested in this BP. Hope it can be discussed during the design summit.


Thanks,
shihanzhang




On 2015-05-12 08:23:07, "Karthik Natarajan" wrote:


Hi Eric,

 

Brocade is also interested in the VLAN aware VM’s BP. Let’s discuss it during 
the design summit.

 

Thanks,

Karthik

 

From: Bob Melander (bmelande) [mailto:bmela...@cisco.com]
Sent: Monday, May 11, 2015 9:51 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron]Anyone looking at support for VLAN-aware 
VMs in Liberty?

 

Hi Eric,

 

Cisco is also interested in the kind of VLAN trunking feature that your VLAN-aware VMs BP describes. If this could be achieved in Liberty, it'd be great.
Perhaps your BP could be brought up during one of the Neutron sessions in Vancouver, e.g., the one on OVN, since there seem to be some similarities?

 

Thanks

Bob

 

 

From: Erik Moe 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: fredag 8 maj 2015 06:29
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [neutron]Anyone looking at support for VLAN-aware 
VMs in Liberty?

 

 

Hi,

 

I have not been able to work on upstreaming this for some time now, but now it looks like I may make another attempt. Who else is interested in this, as a user or to help contribute? If we get some traction we can have an IRC meeting sometime next week.

 

Thanks,

Erik

 

 

From: Scott Drennan [mailto:sco...@nuagenetworks.net]
Sent: den 4 maj 2015 18:42
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [neutron]Anyone looking at support for VLAN-aware VMs 
in Liberty?

 

VLAN-transparent or VLAN-trunking networks have landed in Kilo, but I don't see any work on VLAN-aware VMs for Liberty. There is a blueprint[1] and specs[2] which were deferred from Kilo. Is this something anyone is looking at as a Liberty candidate? I looked but didn't find any recent work; is work on this happening somewhere else? No one has listed it on the liberty summit topics[3] etherpad, which could mean it's uncontroversial, but given the history on this, I think that's unlikely.

 

cheers,

Scott

 

[1]: https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms

[2]: https://review.openstack.org/#/c/94612

[3]: https://etherpad.openstack.org/p/liberty-neutron-summit-topics


Re: [openstack-dev] [Neutron] Deprecating the use_namespaces option - Now's the time to speak up!

2015-03-22 Thread shihanzhang
+1 to deprecate this option


At 2015-03-21 02:57:09, "Assaf Muller"  wrote:
>Hello everyone,
>
>The use_namespaces option in the L3 and DHCP Neutron agents controls if you
>can create multiple routers and DHCP networks managed by a single L3/DHCP 
>agent,
>or if the agent manages only a single resource.
>
>Are the setups out there *not* using the use_namespaces option? I'm curious as
>to why, and if it would be difficult to migrate such a setup to use namespaces.
>
>I'm asking because use_namespaces complicates Neutron code for what I gather
>is an option that has not been relevant for years. I'd like to deprecate the 
>option
>for Kilo and remove it in Liberty.
>
>
>Assaf Muller, Cloud Networking Engineer
>Red Hat
>


[openstack-dev] [neutron] Question about VPNaas

2015-01-27 Thread shihanzhang
Hi Stackers:


I am a novice and I want to use Neutron VPNaaS. From my preliminary understanding of it, I have two questions:
(1) Why can a 'vpnservice' have just one subnet?
(2) Why can't the subnet of a 'vpnservice' be changed?
As far as I know, OpenSwan does not have these limitations.
I've learned that there is a BP to address this:
https://blueprints.launchpad.net/neutron/+spec/vpn-multiple-subnet
but this BP has made no progress.
I want to know whether this will be done in the next cycle or later. Who can help me to explain?



Thanks.

-shihanzhang




Re: [openstack-dev] [neutron][sriov] PciDeviceRequestFailed error

2014-12-07 Thread shihanzhang
I think the problem is in nova. Can you show your "pci_passthrough_whitelist" in nova.conf?
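
It should map the SR-IOV device to its physical network, for example (device and network names are placeholders for your environment):

pci_passthrough_whitelist = {"devname": "eth3", "physical_network": "physnet1"}
# or, selecting the VFs by vendor/product ID:
# pci_passthrough_whitelist = {"vendor_id": "8086", "product_id": "10ca", "physical_network": "physnet1"}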






At 2014-12-04 18:26:21, "Akilesh K"  wrote:

Hi,
I am using neutron-plugin-sriov-agent.


I have configured pci_whitelist  in nova.conf


I have configured ml2_conf_sriov.ini.


But when I launch instance I get the exception in subject.


On further checking with the help of some forum messages, I discovered that 
pci_stats are empty.
mysql>  select hypervisor_hostname,pci_stats from compute_nodes;
+-+---+
| hypervisor_hostname | pci_stats |
+-+---+
| openstack | []|
+-+---+
1 row in set (0.00 sec)



Further to this I found that PciDeviceStats.pools is an empty list too.


Can anyone tell me what I am missing.



Thank you,
Ageeleshwar K


Re: [openstack-dev] [Neutron] FWaaS/Security groups Not blocking ongoing traffic

2014-10-27 Thread shihanzhang
I also agree with filing a new bug for FWaaS.








At 2014-10-28 00:09:29, "Carl Baldwin"  wrote:
>I think I'd suggest opening a new bug for FWaaS since it is a
>different component with different code.  It doesn't seem natural to
>extend the scope of this bug to include it.
>
>Carl
>
>On Mon, Oct 27, 2014 at 9:50 AM, Itzik Brown  wrote:
>>
>> - Original Message -
>>> From: "Carl Baldwin" 
>>> To: "OpenStack Development Mailing List (not for usage questions)" 
>>> 
>>> Sent: Monday, October 27, 2014 5:27:57 PM
>>> Subject: Re: [openstack-dev] [Neutron] FWaaS/Security groups Not blocking 
>>> ongoing traffic
>>>
>>> On Mon, Oct 27, 2014 at 6:34 AM, Simon Pasquier 
>>> wrote:
>>> > Hello Itzik,
>>> > This has been discussed lately on this ML. Please see
>>> > https://bugs.launchpad.net/neutron/+bug/1335375.
>>>
>>> This is a good example that any create, update, or delete of a SG rule
>>> can expose this issue.  This bug only mentions delete.  I'll update
>>> the bug to increase the scope beyond just deletes because it really is
>>> the same conntrack issue at the root of the problem.
>>>
>>> Carl
>>>
>>>
>> Carl,
>>
>> FWaaS has the same issues as well.
>> What do you suggest - open a new bug or updating the current one?
>>
>> Thanks,
>> Itzik
>>
>


Re: [openstack-dev] [Neutron] Killing connection after security group rule deletion

2014-10-24 Thread shihanzhang
Hi, Elena Ezhova, thanks for your work on this problem!
  I agree with your analysis; this is why I filed this bug but didn't submit a patch for it.
  I wanted to use conntrack to solve this bug, but I also thought about the problem you described:
    The problem here is that it is sometimes impossible to tell which connection should be killed. For example there may be two instances running in different namespaces that have the same IP addresses. As a compute node doesn't know anything about namespaces, it cannot distinguish between the two seemingly identical connections:
 $ sudo conntrack -L  | grep "10.0.0.5"
 tcp  6 431954 ESTABLISHED src=10.0.0.3 dst=10.0.0.5 
sport=60723 dport=22 src=10.0.0.5 dst=10.0.0.3 sport=22 dport=60723 [ASSURED] 
mark=0 use=1
 tcp  6 431976 ESTABLISHED src=10.0.0.3 dst=10.0.0.5 
sport=60729 dport=22 src=10.0.0.5 dst=10.0.0.3 sport=22 dport=60729 [ASSURED] 
mark=0 use=1


1. I think this problem is due to the fact that on a compute node all tenants' instances are in the same namespace (with the ovs agent, vlans are used to isolate different tenants' instances), so it can't distinguish the connections in the above use case.
2. ip_conntrack works above L3, so it can't search for a connection by destination MAC.


I am not clear about what ajo said:
  "I'm not sure if removing all the conntrack rules that match a certain filter would be OK enough, as it may only lead to full reevaluation of rules for the next packet of the cleared connections (maybe I'm missing some corner detail, which could be)."








On 2014-10-23 18:22:46, "Elena Ezhova" wrote:

Hi!


I am working on a bug "ping still working once connected even after related 
security group rule is deleted" 
(https://bugs.launchpad.net/neutron/+bug/1335375). The gist of the problem is 
the following: when we delete a security group rule the corresponding rule in 
iptables is also deleted, but the connection, that was allowed by that rule, is 
not being destroyed.
The reason for such behavior is that in iptables we have the following 
structure of a chain that filters input packets for an interface of an istance:


Chain neutron-openvswi-i830fa99f-3 (1 references)
 pkts bytes target prot opt in out source   destination 

0 0 DROP   all  --  *  *   0.0.0.0/00.0.0.0/0   
 state INVALID /* Drop packets that are not associated with a state. */
0 0 RETURN all  --  *  *   0.0.0.0/00.0.0.0/0   
 state RELATED,ESTABLISHED /* Direct packets associated with a known 
session to the RETURN chain. */
0 0 RETURN udp  --  *  *   10.0.0.3 0.0.0.0/0   
 udp spt:67 dpt:68
0 0 RETURN all  --  *  *   0.0.0.0/00.0.0.0/0   
 match-set IPv43a0d3610-8b38-43f2-8 src
0 0 RETURN tcp  --  *  *   0.0.0.0/00.0.0.0/0   
 tcp dpt:22  < rule that allows ssh on port 22  
  
184 RETURN icmp --  *  *   0.0.0.0/00.0.0.0/0   

0 0 neutron-openvswi-sg-fallback  all  --  *  *   0.0.0.0/0 
   0.0.0.0/0/* Send unmatched traffic to the fallback chain. */


So, if we delete the rule that allows tcp on port 22, connections that are
already established won't be closed, because all their packets satisfy the
rule:
    0     0 RETURN  all  --  *   *   0.0.0.0/0  0.0.0.0/0    state RELATED,ESTABLISHED /* Direct packets associated with a known session to the RETURN chain. */



I seek advice on how to deal with the problem. There are a couple of ideas for
how to do it (more or less realistic):

1. Kill the connection using conntrack
   The problem here is that it is sometimes impossible to tell which
connection should be killed. For example there may be two instances running in
different namespaces that have the same ip addresses. As a compute node doesn't
know anything about namespaces, it cannot distinguish between the two seemingly
identical connections:
 $ sudo conntrack -L | grep "10.0.0.5"
 tcp  6 431954 ESTABLISHED src=10.0.0.3 dst=10.0.0.5 sport=60723
dport=22 src=10.0.0.5 dst=10.0.0.3 sport=22 dport=60723 [ASSURED] mark=0 use=1
 tcp  6 431976 ESTABLISHED src=10.0.0.3 dst=10.0.0.5 sport=60729
dport=22 src=10.0.0.5 dst=10.0.0.3 sport=22 dport=60729 [ASSURED] mark=0 use=1

I wonder whether there is any way to search for a connection by destination MAC?

2. Delete the iptables rule that directs packets associated with a known
session to the RETURN chain
   This will force all packets to go through the full chain each time, which
will definitely make the connection close, but it will also strongly affect
performance. A timeout could be introduced after which the rule is restored,
but it is uncertain how long sho

Re: [openstack-dev] [OSSN 0020] Disassociating floating IPs does not terminate NAT connections with Neutron L3 agent

2014-09-16 Thread shihanzhang
There is already a bug for this problem:
https://bugs.launchpad.net/neutron/+bug/1334926
Meanwhile, the security group has the same problem, for which I have reported a
bug:
https://bugs.launchpad.net/neutron/+bug/1335375
 






At 2014-09-16 01:46:11, "Martinx - ジェームズ" wrote:

Hey stackers,


Let me ask something about this... Why not use the Linux conntrack table in
each tenant namespace (L3 router) to detect which connections were
made/established over a Floating IP?


Like this, on the Neutron L3 Router:


--
apt-get install conntrack


ip netns exec qrouter-09b72faa-a5ef-4a52-80b5-1dcbea23b1b6 conntrack -L | grep 
ESTABLISHED



tcp  6 431998 ESTABLISHED src=192.168.3.5 dst=193.16.15.250 sport=36476 
dport=8333 src=193.16.15.250 dst=187.1.93.67 sport=8333 dport=36476 [ASSURED] 
mark=0 use=1
--


Floating IP: 187.1.93.67
Instance IP: 192.168.3.5


http://conntrack-tools.netfilter.org/manual.html#conntrack






Or, as a workaround, right after removing the Floating IP, Neutron might insert
a temporary firewall rule (for about 5~10 minutes?) to drop the connections of
that previous "Floating IP + Instance IP" pair... It looks really ugly but, at
least, it will make sure that nothing passes right after removing a Floating
IP... effectively terminating (dropping) the NAT connections after
disassociating a Floating IP... ;-)
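
A minimal sketch of that workaround (illustrative names and timing, not an
actual Neutron patch), run on the network node hosting the router:

import subprocess
import threading

ROUTER_NS = 'qrouter-09b72faa-a5ef-4a52-80b5-1dcbea23b1b6'  # example namespace
RULE = ['iptables', '-I', 'FORWARD', '-d', '192.168.3.5', '-j', 'DROP']

def ns_exec(args):
    # Run a command inside the router's network namespace.
    subprocess.check_call(['ip', 'netns', 'exec', ROUTER_NS] + args)

ns_exec(RULE)  # drop traffic to the instance's fixed IP right away
# Restore normal forwarding after ~10 minutes, once the old NAT
# connections can be assumed dead.
threading.Timer(600, ns_exec, args=[['iptables', '-D'] + RULE[2:]]).start()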





Also, I think that NFTables can bring some light here... I truly believe that
if OpenStack moves to an "NFTables_Driver", it will be much easier to manage
firewall rules, logging, counters, IDS/IPS, atomic replacement of rules, even
NAT66... all under a single implementation... maybe with some kind of
"real-time connection monitoring"... I mean, with NFTables it becomes easier
to implement a firewall ruleset with an Intrusion Prevention System (IPS); take
a look:


https://home.regit.org/2014/02/suricata-and-nftables/



So, if NFTables can make Suricata's life easier, why not give Suricata's power
to the Neutron L3 router? Starting with a new NFTables_Driver... =)


I'm not an expert on NFTables but, from what I'm seeing, it fits perfectly into
OpenStack; in fact, NFTables would make OpenStack better.


https://home.regit.org/2014/01/why-you-will-love-nftables/



Best!
Thiago


On 15 September 2014 20:49, Nathan Kinder  wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Disassociating floating IPs does not terminate NAT connections with
Neutron L3 agent
- ---

### Summary ###
Every virtual instance is automatically assigned a private IP address.
You may optionally assign public IP addresses to instances. OpenStack
uses the term "floating IP" to refer to an IP address (typically
public) that can be dynamically added to a running virtual instance.
The Neutron L3 agent uses Network Address Translation (NAT) to assign
floating IPs to virtual instances. Floating IPs can be dynamically
released from a running virtual instance but any active connections are
not terminated with this release as expected when using the Neutron L3
agent.

### Affected Services / Software ###
Neutron, Icehouse, Havana, Grizzly, Folsom

### Discussion ###
When creating a virtual instance, a floating IP address is not
allocated by default. After a virtual instance is created, a user can
explicitly associate a floating IP address to that instance. Users can
create connections to the virtual instance using this floating IP
address. Also, this floating IP address can be disassociated from any
running instance without shutting that instance down.

If a user initiates a connection using the floating IP address, this
connection remains alive and accessible even after the floating IP
address is released from that instance. This potentially violates
restrictive policies which are only being applied to new connections.
These policies are ignored for pre-existing connections and the virtual
instance remains accessible from the public network.

This issue is only known to affect Neutron when using the L3 agent.
Nova networking is not affected.

### Recommended Actions ###
There is unfortunately no easy way to detect which connections were
made over a floating IP address from a virtual instance, as the NAT is
performed at the Neutron router. The only safe way of terminating all
connections made over a floating IP address is to terminate the virtual
instance itself.

The following recommendations should be followed when using the Neutron
L3 agent:

- - Only attach a floating IP address to a virtual instance when that
instance should be accessible from networks outside the cloud.
- - Terminate or stop the instance along with disassociating the floating
IP address to ensure that all connections are closed.

The Neutron development team plans to address this issue in a future
version of Neutron.

### Contacts / References ###
This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0020
Original LaunchPad Bug : https://bugs.launchpad.net/neutron/+bug/1334926
OpenStack Security ML : openstack-secur...@l

Re: [openstack-dev] [neutron][security-groups] Neutron default security groups

2014-09-16 Thread shihanzhang
As far as I know there is no way to disable default security groups, but I
think this BP can solve the problem:
https://blueprints.launchpad.net/neutron/+spec/default-rules-for-default-security-group



At 2014-09-17 07:44:42, "Aaron Rosen" wrote:

Hi, 


Inline: 


On Tue, Sep 16, 2014 at 1:00 AM, Fawad Khaliq  wrote:

Folks,


I have had discussions with some folks individually about this, but I would
like to bring this to a broader audience.


I have been playing with security groups, and the notion of a 'default'
security group seems to create some nuisances/issues.


There is a list of things I have noticed so far:
- The tenant for OpenStack services (normally named service/services) also ends
up having a default security group.
- The port create operation ends up ensuring default security groups for all
tenants, which seems completely out of the context of the tenant the port
operation takes place in. (bug?)
- Race conditions: if the system is stressed and Neutron tries to ensure the
first default security group while another call comes in parallel, Neutron ends
up trying to create multiple default security groups, as the checks for
duplicate groups are invalidated as soon as the call makes it past a certain
point in the code.

Right, this is a bug. We should catch this foreign key constraint exception
here and pass, as the default security group was already created. We do
something similar when ports are created, as there is a similar race for
ip_allocation.
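
A rough sketch of that pattern (illustrative code only, not the actual Neutron
DB layer; DuplicateEntry stands in for the real unique-constraint exception,
and the dict simulates the security group table):

import threading

_defaults = {}  # tenant_id -> default security group (simulated DB table)
_lock = threading.Lock()

class DuplicateEntry(Exception):
    """Stands in for the DB layer's unique-constraint violation."""

def _db_insert(tenant_id, sg):
    with _lock:
        if tenant_id in _defaults:  # unique constraint on tenant_id
            raise DuplicateEntry()
        _defaults[tenant_id] = sg

def ensure_default_security_group(tenant_id):
    sg = _defaults.get(tenant_id)
    if sg is not None:
        return sg
    try:
        _db_insert(tenant_id, {'name': 'default', 'tenant_id': tenant_id})
    except DuplicateEntry:
        # A concurrent request won the race and already created the
        # default group; that is fine, fall through and return it.
        pass
    return _defaults[tenant_id]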


- API performance, where orchestration chooses to spawn 1000 tenants and we see
unnecessary overhead.
- Plugins that use RESTful proxy backends require the backend systems to be up
at the time neutron starts. [Minor, but affects some packaging solutions]


This is probably always a requirement for neutron to work, so I don't think
it's related.


To summarize: is there a way to disable default security groups? The expected
answer is no; can we introduce a way to disable them? If that does not make
sense, should we go ahead and fix the issues around them?


I think we should fix these bugs you pointed out.  


I am sure some of you must have seen some of these issues and solved them
already. Please do share how you tackle them!


Thanks,

Fawad Khaliq






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Performance of security group

2014-08-20 Thread shihanzhang
hi neutroners!
my patch for the BP
https://blueprints.launchpad.net/openstack/?searchtext=add-ipset-to-security
needs ipset installed in devstack; I have committed the patch
https://review.openstack.org/#/c/113453/, who can help me review it?
Thanks very much!


 Best regards,
shihanzhang





At 2014-08-21 10:47:59, "Martinx - ジェームズ"  wrote:

+1 "NFTablesDriver"!


Also, NFTables, AFAIK, improves IDS systems, like Suricata, for example: 
https://home.regit.org/2014/02/suricata-and-nftables/


Then, I'm wondering here... What benefits might come to OpenStack Nova /
Neutron if it came with an NFTables driver instead of the current IPTables?!


* Efficient Security Group design?
* Better FWaaS, maybe with NAT(44/66) support?
* Native support for IPv6, with the defamed NAT66 built-in, simpler "Floating 
IP" implementation, for both v4 and v6 networks under a single implementation 
(I don't like NAT66, I prefer a `routed Floating IPv6` version)?
* Metadata over IPv6 still using NAT(66) (I don't like NAT66), single 
implementation?
* Suricata-as-a-Service?!


It sounds pretty cool!   :-)



On 20 August 2014 23:16, Baohua Yang  wrote:

Great!
We met similar problems.
The current mechanisms produce too many iptables rules, and it's hard to debug.
Really look forward to seeing a more efficient security group design.



On Thu, Jul 10, 2014 at 11:44 PM, Kyle Mestery  
wrote:

On Thu, Jul 10, 2014 at 4:30 AM, shihanzhang  wrote:
>
> With the deployment 'nova + neutron + openvswitch', when we bulk create
> about 500 VMs with a default security group, the CPU usage of neutron-server
> and the openvswitch agent is very high; in particular, the CPU usage of the
> openvswitch agent reaches 100%, which causes VM creation to fail.
>
> With the methods discussed on the mailing list:
>
> 1) ipset optimization   (https://review.openstack.org/#/c/100761/)
>
> 3) sg rpc optimization (with fanout)
> (https://review.openstack.org/#/c/104522/)
>
> I have implemented these two schemes in my deployment; when we again bulk
> create about 500 VMs with a default security group, the CPU usage of the
> openvswitch agent drops to 10% or even lower, so I think the improvement
> from these two options is very significant.
>
> Who can help us review our spec?
>

This is great work! These are on my list of things to review in detail
soon, but given the Neutron sprint this week, I haven't had time yet.
I'll try to remedy that by the weekend.

Thanks!
Kyle


>Best regards,
> shihanzhang
>
>
>
>
>
> At 2014-07-03 10:08:21, "Ihar Hrachyshka"  wrote:
>>-BEGIN PGP SIGNED MESSAGE-
>>Hash: SHA512
>>
>>Oh, so you have the enhancement implemented? Great! Any numbers that
>>show how much we gain from that?
>>
>>/Ihar
>>
>>On 03/07/14 02:49, shihanzhang wrote:
>>> Hi, Miguel Angel Ajo! Yes, the ipset implementation is ready; today
>>> I will modify my spec, and when the spec is approved I will commit the
>>> code as soon as possible!
>>>
>>>
>>>
>>>
>>>
>>> At 2014-07-02 10:12:34, "Miguel Angel Ajo" 
>>> wrote:
>>>>
>>>> Nice Shihanzhang,
>>>>
>>>> Do you mean the ipset implementation is ready, or just the
>>>> spec?.
>>>>
>>>>
>>>> For the SG group refactor, I don't worry about who does it, or
>>>> who takes the credit, but I believe it's important we address
>>>> this bottleneck during Juno trying to match nova's scalability.
>>>>
>>>> Best regards, Miguel Ángel.
>>>>
>>>>
>>>> On 07/02/2014 02:50 PM, shihanzhang wrote:
>>>>> hi Miguel Ángel and Ihar Hrachyshka, I agree with you that
>>>>> split  the work in several specs, I have finished the work (
>>>>> ipset optimization), you can do 'sg rpc optimization (without
>>>>> fanout)'. as the third part(sg rpc optimization (with fanout)),
>>>>> I think we need talk about it, because just using ipset to
>>>>> optimize security group agent codes does not bring the best
>>>>> results!
>>>>>
>>>>> Best regards, shihanzhang.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> At 2014-07-02 04:43:24, "Ihar Hrachyshka" 
>>>>> wrote:
>>> On 02/07/14 10:12, Miguel Angel Ajo wrote:
>>>
>>>> Shihazhang,
>>>
>>>> I really believe we need the RPC refactor do

Re: [openstack-dev] [Neutron] Is network ordering of vNICs guaranteed?

2014-08-09 Thread shihanzhang
Hi Paul, as far as I know nova can guarantee the ordering of vNICs. Can you
provide a reproducible test script for this? I would be glad to test it.








At 2014-08-10 01:16:16, "Jay Pipes"  wrote:
>Paul, does this friend of a friend have a reproducible test script for
>this?
>
>Thanks!
>-jay
>
>On 08/08/2014 04:42 PM, Kevin Benton wrote:
>> If this is true, I think the issue is not on Neutron side but the Nova
>> side.
>> Neutron just receives and handles individual port requests. It has no
>> notion of the order in which they are attached to the VM.
>>
>> Can you add the Nova tag to get some visibility to the Nova devs?
>>
>>
>> On Fri, Aug 8, 2014 at 11:32 AM, CARVER, PAUL wrote:
>>
>> I’m hearing “friend of a friend” that people have looked at the code
>> and determined that the order of networks on a VM is not guaranteed.
>> Can anyone confirm whether this is true? If it is true, is there any
>> reason why this is not considered a bug? I’ve never seen it happen
>> myself.
>>
>> To elaborate, I'm being told that if you create some VMs with
>> several vNICs on each and you want them to be, for example:
>>
>> 1) Management Network
>> 2) Production Network
>> 3) Storage Network
>>
>> You can't count on all the VMs having eth0 connected to the
>> management network, eth1 on the production network, eth2 on the
>> storage network.
>>
>> I'm being told that they will come up like that most of the time,
>> but sometimes you will see, for example, a VM might wind up with
>> eth0 connected to the production network, eth1 to the storage
>> network, and eth2 connected to the management network (or some other
>> permutation.)
>>
>>
>>
>>
>>
>>
>>
>> --
>> Kevin Benton
>>
>>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Performance of security group

2014-07-02 Thread shihanzhang
hi Miguel Ángel and Ihar Hrachyshka,
I agree with you that we should split the work into several specs. I have
finished the work (ipset optimization); you can do 'sg rpc optimization
(without fanout)'. As for the third part (sg rpc optimization (with fanout)),
I think we need to talk about it, because just using ipset to optimize the
security group agent code does not bring the best results!


Best regards,
shihanzhang.










At 2014-07-02 04:43:24, "Ihar Hrachyshka"  wrote:
>-BEGIN PGP SIGNED MESSAGE-
>Hash: SHA512
>
>On 02/07/14 10:12, Miguel Angel Ajo wrote:
>> 
>> Shihazhang,
>> 
>> I really believe we need the RPC refactor done for this cycle, and
>> given the close deadlines we have (July 10 for spec submission and 
>> July 20 for spec approval).
>> 
>> Don't you think it's going to be better to split the work in
>> several specs?
>> 
>> 1) ipset optimization   (you) 2) sg rpc optimization (without
>> fanout) (me) 3) sg rpc optimization (with fanout) (edouard, you ,
>> me)
>> 
>> 
>> This way we increase the chances of having part of this for the
>> Juno cycle. If we go for something too complicated is going to take
>> more time for approval.
>> 
>
>I agree. And it not only increases chances to get at least some of
>those highly demanded performance enhancements to get into Juno, it's
>also "the right thing to do" (c). It's counterproductive to put
>multiple vaguely related enhancements in single spec. This would dim
>review focus and put us into position of getting 'all-or-nothing'. We
>can't afford that.
>
>Let's leave one spec per enhancement. @Shihazhang, what do you think?
>
>> 
>> Also, I proposed the details of "2", trying to bring awareness on
>> the topic, as I have been working with the scale lab in Red Hat to
>> find and understand those issues, I have a very good knowledge of
>> the problem and I believe I could make a very fast advance on the
>> issue at the RPC level.
>> 
>> Given that, I'd like to work on this specific part, whether or not
>> we split the specs, as it's something we believe critical for 
>> neutron scalability and thus, *nova parity*.
>> 
>> I will start a separate spec for "2", later on, if you find it ok, 
>> we keep them as separate ones, if you believe having just 1 spec
>> (for 1 & 2) is going be safer for juno-* approval, then we can
>> incorporate my spec in yours, but then "add-ipset-to-security" is
>> not a good spec title to put all this together.
>> 
>> 
>> Best regards, Miguel Ángel.
>> 
>> 
>> On 07/02/2014 03:37 AM, shihanzhang wrote:
>>> 
>>> hi Miguel Angel Ajo Pelayo! I agree with you and will modify my spec,
>>> but I will also optimize the RPC from the security group agent to the
>>> neutron server. Now the model is 'port[rule1,rule2...], port...';
>>> I will change it to 'port[sg1, sg2..]', which can reduce the size
>>> of the RPC response message from the neutron server to the security
>>> group agent.
>>> 
>>> At 2014-07-01 09:06:17, "Miguel Angel Ajo Pelayo" 
>>>  wrote:
>>>> 
>>>> 
>>>> Ok, I was talking with Édouard @ IRC, and as I have time to
>>>> work into this problem, I could file an specific spec for the
>>>> security group RPC optimization, a masterplan in two steps:
>>>> 
>>>> 1) Refactor the current RPC communication for 
>>>> security_groups_for_devices, which could be used for full
>>>> syncs, etc..
>>>> 
>>>> 2) Benchmark && make use of a fanout queue per security group
>>>> to make sure only the hosts with instances on a certain
>>>> security group get the updates as they happen.
>>>> 
>>>> @shihanzhang do you find it reasonable?
>>>> 
>>>> 
>>>> 
>>>> - Original Message -
>>>>> - Original Message -
>>>>>> @Nachi: Yes that could a good improvement to factorize the
>>>>>> RPC
>>>>> mechanism.
>>>>>> 
>>>>>> Another idea: What about creating a RPC topic per security
>>>>>> group (quid of the
>>>>> RPC topic
>>>>>> scalability) on which an agent subscribes if one of its
>>>>>> ports is
>>>>> associated
>>>>>> to the security group?
>>>>>> 
>>>>>> Regards, Édouar

Re: [openstack-dev] [neutron]Performance of security group

2014-07-01 Thread shihanzhang


hi Miguel Angel Ajo Pelayo!
I agree with you and will modify my spec, but I will also optimize the RPC from
the security group agent to the neutron server.
Now the model is 'port[rule1,rule2...], port...'; I will change it to
'port[sg1, sg2..]', which can reduce the size of the RPC response message from
the neutron server to the security group agent.
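
A minimal sketch of the payload change (illustrative structures, not the actual
RPC schema):

# Today: every port carries its fully expanded rule list, so the rules of a
# shared security group are duplicated for every port that uses it.
current = {
    'port1': [{'proto': 'tcp', 'dport': 22}, {'proto': 'icmp'}],
    'port2': [{'proto': 'tcp', 'dport': 22}, {'proto': 'icmp'}],
}

# Proposed: ports reference security group ids, and each group's rules are
# sent once, so the response grows with the number of groups rather than
# ports x rules.
proposed = {
    'security_groups': {'sg1': [{'proto': 'tcp', 'dport': 22}],
                        'sg2': [{'proto': 'icmp'}]},
    'ports': {'port1': ['sg1', 'sg2'], 'port2': ['sg1']},
}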
At 2014-07-01 09:06:17, "Miguel Angel Ajo Pelayo"  wrote:
>
>
>Ok, I was talking with Édouard @ IRC, and as I have time to work 
>into this problem, I could file an specific spec for the security
>group RPC optimization, a masterplan in two steps:
>
>1) Refactor the current RPC communication for security_groups_for_devices,
>   which could be used for full syncs, etc..
>
>2) Benchmark && make use of a fanout queue per security group to make
>   sure only the hosts with instances on a certain security group get
>   the updates as they happen.
>
>@shihanzhang do you find it reasonable?
>
>
>
>- Original Message -
>> - Original Message -
>> > @Nachi: Yes that could a good improvement to factorize the RPC mechanism.
>> > 
>> > Another idea:
>> > What about creating a RPC topic per security group (quid of the RPC topic
>> > scalability) on which an agent subscribes if one of its ports is associated
>> > to the security group?
>> > 
>> > Regards,
>> > Édouard.
>> > 
>> > 
>> 
>> 
>> Hmm, Interesting,
>> 
>> @Nachi, I'm not sure I fully understood:
>> 
>> 
>> SG_LIST [ SG1, SG2]
>> SG_RULE_LIST = [SG_Rule1, SG_Rule2] ..
>> port[SG_ID1, SG_ID2], port2 , port3
>> 
>> 
>> Probably we may need to include also the
>> SG_IP_LIST = [SG_IP1, SG_IP2] ...
>> 
>> 
>> and let the agent do all the combination work.
>> 
>> Something like this could make sense?
>> 
>> Security_Groups = {SG1: {IPs: [], RULES: []},
>>                    SG2: {IPs: [], RULES: []}}
>> 
>> Ports = {Port1:[SG1, SG2], Port2: [SG1]  }
>> 
>> 
>> @Edouard, actually I like the idea of having the agent subscribed
>> to security groups they have ports on... That would remove the need to
>> include
>> all the security groups information on every call...
>> 
>> But would need another call to get the full information of a set of security
>> groups
>> at start/resync if we don't already have any.
>> 
>> 
>> > 
>> > On Fri, Jun 20, 2014 at 4:04 AM, shihanzhang < ayshihanzh...@126.com >
>> > wrote:
>> > 
>> > 
>> > 
>> > hi Miguel Ángel,
> > I very much agree with you on the following points:
> > >  * physical implementation on the hosts (ipsets, nftables, ... )
> > -- this can reduce the load on the compute node.
> > >  * rpc communication mechanisms.
> > -- this can reduce the load on the neutron server.
> > can you help me review my BP spec?
>> > 
>> > 
>> > 
>> > 
>> > 
>> > 
>> > 
>> > At 2014-06-19 10:11:34, "Miguel Angel Ajo Pelayo" < mangel...@redhat.com >
>> > wrote:
>> > >
>> > >  Hi it's a very interesting topic, I was getting ready to raise
>> > >the same concerns about our security groups implementation, shihanzhang
>> > >thank you for starting this topic.
>> > >
>> > >  Not only at low level where (with our default security group
>> > >rules -allow all incoming from 'default' sg- the iptable rules
>> > >will grow in ~X^2 for a tenant, and, the
>> > >"security_group_rules_for_devices"
>> > >rpc call from ovs-agent to neutron-server grows to message sizes of
>> > >>100MB,
>> > >generating serious scalability issues or timeouts/retries that
>> > >totally break neutron service.
>> > >
>> > >   (example trace of that RPC call with a few instances
>> > > http://www.fpaste.org/104401/14008522/ )
>> > >
>> > >  I believe that we also need to review the RPC calling mechanism
>> > >for the OVS agent here, there are several possible approaches to breaking
>> > >down (or/and CIDR compressing) the information we return via this api
>> > >call.
>> > >
>> > >
>> > >   So we have to look at two things here:
>> > >
>> > >  * physical implementation on the hosts (ipsets, nftables, ... )
>> > >

Re: [openstack-dev] [Neutron]One security issue about floating ip

2014-06-27 Thread shihanzhang
I think this problem also exists with security groups!






At 2014-06-27 11:20:31, "stanzgy"  wrote:

I have filed this bug on nova
https://bugs.launchpad.net/nova/+bug/1334938




On Fri, Jun 27, 2014 at 10:19 AM, Yongsheng Gong  
wrote:

I have reported it on neutron project
https://bugs.launchpad.net/neutron/+bug/1334926




On Fri, Jun 27, 2014 at 5:07 AM, Vishvananda Ishaya  
wrote:
I missed that going in, but it appears that clean_conntrack is not done on
disassociate, just during migration. It sounds like we should remove the
explicit call in migrate, and just always call it from remove_floating_ip.

Vish

On Jun 26, 2014, at 1:48 PM, Brian Haley  wrote:

> Signed PGP part

> I believe nova-network does this by using 'conntrack -D -r $fixed_ip' when the
> floating IP goes away (search for clean_conntrack), Neutron doesn't when it
> removes the floating IP.  Seems like it's possible to close most of that gap
> in the l3-agent - when it removes the IP from its qg- interface it can do a
> similar operation.
>

> -Brian
>
> On 06/26/2014 03:36 PM, Vishvananda Ishaya wrote:
> > I believe this will affect nova-network as well. We probably should use
> > something like the linux cutter utility to kill any ongoing connections
> > after we remove the nat rule.
> >
> > Vish
> >
> > On Jun 25, 2014, at 8:18 PM, Xurong Yang  wrote:
> >
> >> Hi folks,
> >>
> >> After we create an SSH connection to a VM via its floating ip, even
> >> though we have removed the floating ip association, we can still access
> >> the VM via that connection. Namely, SSH is not disconnected when the
> >> floating ip is not valid. Any good solution about this security issue?
> >>
> >> Thanks Xurong Yang ___
> >> OpenStack-dev mailing list OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > ___ OpenStack-dev mailing list
> >  OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev








___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





--
Best Regards,

Gengyuan Zhang
NetEase Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] default security group rules in neutron

2014-06-22 Thread shihanzhang
Hi, Lingxian
I think we should indeed port this feature to neutron; it will be very
convenient for operators using the default security group!










At 2014-06-23 10:23:39, "Lingxian Kong"  wrote:
>Greetings
>
>We use neutron as network functionality implementation in nova, and as
>you know, there is a feature called 'os-security-group-default-rules'
>in nova extension[1], a hook mechanism to add customized rules when
>creating default security groups, which is a very useful feature to
>the administrators or operators (at least useful to us in our
>deployment). But I found this feature is valid only when using
>nova-network.
>
>So, for the functionality parity between nova-network and neutron and
>for our use case, I registered a blueprint[2] about default security
>group rules in Neutron days ago and related neutron spec[3], and I
>want it to be involved in Juno, so we can upgrade our deployment that
>time for this feature. I'm ready for the code implementation[3].
>
>But I still want to see what's the community's thought about including
>this feature in neutron, any of your feedback and comments are
>appreciated!
>
>[1] 
>https://blueprints.launchpad.net/nova/+spec/default-rules-for-default-security-group
>[2] 
>https://blueprints.launchpad.net/neutron/+spec/default-rules-for-default-security-group
>[3] https://review.openstack.org/98966
>[4] https://review.openstack.org/99320
>
>-- 
>Regards!
>---
>Lingxian Kong
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Performance of security group

2014-06-19 Thread shihanzhang
hi Miguel Ángel,
I very much agree with you on the following points:
>  * physical implementation on the hosts (ipsets, nftables, ... )
-- this can reduce the load on the compute node.
>  * rpc communication mechanisms.
-- this can reduce the load on the neutron server.
can you help me review my BP spec?










At 2014-06-19 10:11:34, "Miguel Angel Ajo Pelayo"  wrote:
>
>  Hi it's a very interesting topic, I was getting ready to raise
>the same concerns about our security groups implementation, shihanzhang
>thank you for starting this topic.
>
>  Not only at low level where (with our default security group
>rules -allow all incoming from 'default' sg- the iptable rules
>will grow in ~X^2 for a tenant, and, the "security_group_rules_for_devices"
>rpc call from ovs-agent to neutron-server grows to message sizes of >100MB,
>generating serious scalability issues or timeouts/retries that 
>totally break neutron service.
>
>   (example trace of that RPC call with a few instances
>http://www.fpaste.org/104401/14008522/)
>
>  I believe that we also need to review the RPC calling mechanism
>for the OVS agent here, there are several possible approaches to breaking
>down (or/and CIDR compressing) the information we return via this api call.
>
>
>   So we have to look at two things here:
>
>  * physical implementation on the hosts (ipsets, nftables, ... )
>  * rpc communication mechanisms.
>
>   Best regards,
>Miguel Ángel.
>
>- Mensaje original - 
>
>> Did you think about nftables, which will replace {ip,ip6,arp,eb}tables?
>> It is also based on the rule set mechanism.
>> The issue with that proposition is that it has only been stable since the
>> beginning of the year, and only on Linux kernel 3.13.
>> But there are lots of pros I don't list here (it overcomes iptables
>> limitations: efficient rule updates, rule sets, standardization of
>> netfilter commands...).
>
>> Édouard.
>
>> On Thu, Jun 19, 2014 at 8:25 AM, henry hly < henry4...@gmail.com > wrote:
>
>> > we have done some tests but got a different result: the performance is
>> > nearly the same for empty vs. 5k rules in iptables, but there is a huge
>> > gap between enabling and disabling the iptables hook on the linux bridge
>> 
>
>> > On Thu, Jun 19, 2014 at 11:21 AM, shihanzhang < ayshihanzh...@126.com >
>> > wrote:
>> 
>
>> > > Now I do not have accurate test data, but I can confirm the following
>> > > points:
>> > 
>> 
>> > > 1. On a compute node, a VM's iptables chain is linear and iptables
>> > > filters packets rule by rule; if a VM is in the default security group
>> > > and that group has many members, the chain gets long, whereas with an
>> > > ipset match the time to filter one member or many members is not much
>> > > different.
>> > 
>> 
>> > > 2. When the iptables rule set is very large, the probability that
>> > > iptables-save fails to save the rules is very high.
>> > 
>> 
>
>> > > At 2014-06-19 10:55:56, "Kevin Benton" < blak...@gmail.com > wrote:
>> > 
>> 
>
>> > > > This sounds like a good idea to handle some of the performance issues
>> > > > until the ovs firewall can be implemented down the line.
>> > > 
>> > 
>> 
>> > > > Do you have any performance comparisons?
>> > > 
>> > 
>> 
>> > > > On Jun 18, 2014 7:46 PM, "shihanzhang" < ayshihanzh...@126.com > wrote:
>> > > 
>> > 
>> 
>
>> > > > > Hello all,
>> > > > 
>> > > 
>> > 
>> 
>
>> > > > > Now in neutron, iptables is used to implement security groups, but
>> > > > > the performance of this implementation is very poor; there is a bug:
>> > > > > https://bugs.launchpad.net/neutron/+bug/1302272 reflecting this
>> > > > > problem. In his test, with default security groups (which have a
>> > > > > remote security group), beyond 250-300 VMs there were around 6k
>> > > > > iptables rules on every compute node; although his patch reduces the
>> > > > > processing time, it doesn't solve this problem fundamentally. I have
>> > > > > committed a BP to solve this problem

Re: [openstack-dev] [neutron]Performance of security group

2014-06-18 Thread shihanzhang
Now I do not have accurate test data, but I can confirm the following points:
1. On a compute node, a VM's iptables chain is linear and iptables filters
packets rule by rule; if a VM is in the default security group and that group
has many members, the chain gets long, whereas with an ipset match the time to
filter one member or many members is not much different.
2. When the iptables rule set is very large, the probability that iptables-save
fails to save the rules is very high.
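
A minimal sketch of the idea (illustrative names, not the BP implementation):
one ipset-backed rule replaces one RETURN rule per member IP, and membership
changes touch only the set:

import subprocess

SG_MEMBERS = ['10.0.0.3', '10.0.0.5', '10.0.0.7']  # example member IPs

def run(cmd):
    subprocess.check_call(cmd)

# One set holds all member IPs of the security group.
run(['ipset', 'create', 'IPv4mysg', 'hash:net', '-exist'])
for ip in SG_MEMBERS:
    run(['ipset', 'add', 'IPv4mysg', ip, '-exist'])

# A single iptables rule matches the whole set, keeping the chain short
# no matter how many members the group has.
run(['iptables', '-A', 'neutron-openvswi-i830fa99f-3',
     '-m', 'set', '--match-set', 'IPv4mysg', 'src', '-j', 'RETURN'])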






At 2014-06-19 10:55:56, "Kevin Benton"  wrote:


This sounds like a good idea to handle some of the performance issues until the
ovs firewall can be implemented down the line.
Do you have any performance comparisons?

On Jun 18, 2014 7:46 PM, "shihanzhang"  wrote:

Hello all,


Now in neutron, iptables is used to implement security groups, but the
performance of this implementation is very poor; there is a bug:
https://bugs.launchpad.net/neutron/+bug/1302272 reflecting this problem. In his
test, with default security groups (which have a remote security group), beyond
250-300 VMs there were around 6k iptables rules on every compute node. Although
his patch reduces the processing time, it doesn't solve this problem
fundamentally. I have committed a BP to solve this problem:
https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security
Is anyone else interested in this?



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron]Performance of security group

2014-06-18 Thread shihanzhang
Hello all,


Now in neutron, iptables is used to implement security groups, but the
performance of this implementation is very poor; there is a bug:
https://bugs.launchpad.net/neutron/+bug/1302272 reflecting this problem. In his
test, with default security groups (which have a remote security group), beyond
250-300 VMs there were around 6k iptables rules on every compute node. Although
his patch reduces the processing time, it doesn't solve this problem
fundamentally. I have committed a BP to solve this problem:
https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security
Is anyone else interested in this?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev][Neutron] Port Mirroring Extension in Neutron

2014-05-15 Thread shihanzhang
Hi Vinay,
I am very happy to participate in this discussion!






At 2014-05-16 00:03:35, "Kanzhe Jiang" wrote:

Hi Vinay,


I am interested. You could sign up a slot for a Network POD discussion.


Thanks,
Kanzhe



On Thu, May 15, 2014 at 7:13 AM, Vinay Yadhav  wrote:

Hi,


I am Vinay, working with Ericsson.


I am interested in the following blueprint regarding port mirroring extension 
in neutron: https://blueprints.launchpad.net/neutron/+spec/port-mirroring


I am close to finishing an implementation of this extension in the OVS plugin
and will be submitting a neutron spec related to the blueprint soon.


I would like to know who else is interested in introducing a port mirroring
extension in neutron.


It would be great if we can discuss and collaborate on developing and testing
this extension.


I am currently attending the OpenStack Summit in Atlanta, so if any of you are
interested in the blueprint, we can meet here at the summit and discuss how to
proceed with it.


Cheers,
main(i){putchar((5852758>>((i-1)/2)*8)-!(1&i)*'\r')^89&&main(++i);}


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







--
Kanzhe Jiang
MTS at BigSwitch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] the question of security group rule

2014-04-08 Thread shihanzhang
Howdy Stackers!


There is a security group problem that has been bothering me, but I do not know
whether it is appropriate to ask about it here! A security group rule is
converted to iptables rules on the compute node, but one iptables rule, '-m
state --state RELATED,ESTABLISHED -j RETURN', confuses me. According to my
understanding this rule improves security group performance by only filtering
the first packet of a connection; are there other reasons for it?
I have a use case: create a security group with a few rules, then gradually add
security group rules as the business grows. If a VM in this security group
already has established connections, the new rules will not take effect on
them. How should I deal with such scenarios?
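
One practical way to make new rules apply to existing flows (assuming the
conntrack tool is installed on the compute node) is to flush the matching
conntrack entries after changing the rules, so the next packet of each old
connection no longer matches RELATED,ESTABLISHED and must traverse the updated
chain:

import subprocess

def flush_conntrack_for_ip(ip):
    # Remove established-flow state for this instance IP in both
    # directions; conntrack exits non-zero when nothing matched.
    for flag in ('--orig-src', '--orig-dst'):
        try:
            subprocess.check_call(['conntrack', '-D', flag, ip])
        except subprocess.CalledProcessError:
            pass

flush_conntrack_for_ip('10.0.0.5')
___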
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Interest in discussing vendor plugins for L3 services?

2014-02-14 Thread shihanzhang
I'm interested in it. UTC+8.
At 2014-02-15 00:31:47, "punal patel" wrote:

I am interested. UTC - 8.



On Fri, Feb 14, 2014 at 1:48 AM, Nick Ma  wrote:
I'm also interested in it. UTC+8.

--

cheers,
Li Ma



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron]A problem produced by accidentally deleting DHCP port

2014-02-13 Thread shihanzhang
Howdy folks!
I am a beginner with neutron, and there is a problem that has confused me. In
my environment, which uses the openvswitch plugin, I deleted the DHCP port by
mistake, and then found that the VMs in the subnet whose DHCP port was deleted
could not get an IP. The reason is that when a DHCP port is deleted, neutron
recreates the DHCP port automatically, but the old VIF TAP is not deleted, so
the same IP address then exists on two TAPs.
Even though the problem is caused by an erroneous operation, I think the DHCP
port should not be allowed to be deleted, because the port is created by
neutron automatically, not by the tenant. Similarly, the port on a router is
not allowed to be deleted.
I want to know whether this is considered a problem.
This is the bug I have filed: https://bugs.launchpad.net/neutron/+bug/1279683
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova]Why not allow to create a vm directly with two VIF in the same network

2014-01-27 Thread shihanzhang


Hi Paul:
  I am very glad to put together the practical use cases in which the same VM
would benefit from multiple virtual connections to the same network; whatever
it takes, I think we should at least guarantee consistency between creating VMs
with NICs and attaching NICs.




At 2014-01-24 22:33:36, "CARVER, PAUL" wrote:


I agree that I’d like to see a set of use cases for this. This is the second 
time in as many days that I’ve heard about a desire to have such a thing but I 
still don’t think I understand any use cases adequately.

 

In the physical world it makes perfect sense, LACP, MLT, 
Etherchannel/Portchannel, etc. In the virtual world I need to see a detailed 
description of one or more use cases.

 

Shihanzhang, why don’t you start up an Etherpad or something and start putting 
together a list of one or more practical use cases in which the same VM would 
benefit from multiple virtual connections to the same network. If it really 
makes sense we ought to be able to clearly describe it.

 

--

Paul Carver

VO: 732-545-7377

Cell: 908-803-1656

E: pcar...@att.com

Q Instant Message

 

From: Day, Phil [mailto:philip@hp.com]
Sent: Friday, January 24, 2014 09:11
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova]Why not allow to create a vm directly with 
two VIF in the same network

 

I agree it's oddly inconsistent (you'll get used to that over time ;-)  - but
to me it feels more like the validation is missing on the attach rather than
that the create should allow two VIFs on the same network.   Since these are
both virtualised (i.e. share the same bandwidth, don't provide any additional
resilience, etc) I'm curious about why you'd want two VIFs in this
configuration?

 

From: shihanzhang [mailto:ayshihanzh...@126.com]
Sent: 24 January 2014 03:22
To:openstack-dev@lists.openstack.org
Subject: [openstack-dev] [nova]Why not allow to create a vm directly with two 
VIF in the same network

 

I am a beginner with nova, and there is a problem which has confused me: in the
latest version it is not allowed to create a VM directly with two VIFs on the
same network, but it is allowed to add a VIF whose network is the same as an
existing VIF's network. So the use case of a VM with two VIFs on the same
network exists; why not allow creating the VM directly with two VIFs on the
same network?

 ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova]Why not allow to create a vm directly with two VIF in the same network

2014-01-23 Thread shihanzhang
I am a beginner with nova, and there is a problem which has confused me: in the
latest version it is not allowed to create a VM directly with two VIFs on the
same network, but it is allowed to add a VIF whose network is the same as an
existing VIF's network. So the use case of a VM with two VIFs on the same
network exists; why not allow creating the VM directly with two VIFs on the
same network?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] About ports backing floating IPs

2014-01-15 Thread shihanzhang
I am also perplexed about the ports backing floating IPs. When the selected
plugin is OVS or LB, the IP address belonging to the port backing a floating IP
is actually set on the router's 'qg-' VIF, so what is the port's real role?
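
For example, on the network node you can see this (the router UUID is
illustrative):

import subprocess

ROUTER_NS = 'qrouter-09b72faa-a5ef-4a52-80b5-1dcbea23b1b6'  # example router
# Floating IPs show up as extra /32 addresses on the router's external
# 'qg-' device, even though the Neutron port backing them stays DOWN.
out = subprocess.check_output(
    ['ip', 'netns', 'exec', ROUTER_NS, 'ip', '-o', 'addr', 'show']).decode()
for line in out.splitlines():
    if 'qg-' in line:
        print(line)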





At 2014-01-15 07:50:36,"Salvatore Orlando"  wrote:


TL;DR;
I have been looking back at the API and found out that it's a bit weird how 
floating IPs are mapped to ports. This might or might not be an issue, and 
several things can be done about it.
The rest of this post is a boring description of the problem and a possibly 
even more boring list of potential solutions.

Floating IPs are backed by ports on the external network where they are 
implemented; while there are good reason for doing so, this has some seemingly 
weird side effects, which are usually not visible to tenants as only admins are 
allowed (by default) to view the ports backing the floating IPs.


Assigning an external port to a floating IP is an easy way for ensuring the IP 
address used for the floating IP is then not reused for other allocation 
purposes on the external network; indeed admin users might start VMs on 
external networks as well. Conceptually, it is also an example of port-level 
insertion for a network service (DNAT/SNAT).

However these are the tricky aspects:
- IP Address changes: The API allows IP address updates for a floating IP port. 
However as it might be expected, the IP of the floating IP entities does not 
change, as well as the actual floating IP implemented in the backend (l3 agent 
or whatever the plugin uses).
- operational status: It is always down at least for plugins based on OVS/LB 
agents. This is because there is no actual VIF backing a floating IP, so there 
is nothing to wire.
- admin status: updating it just has no effect at all
- Security groups and  allowed address pairs: The API allows for updating them, 
but it is not clear whether something actually happens in the backend, and I'm 
even not entirely sure this makes sense at all. 

Why these things happen, whether it's intended behaviour, and whether it's the 
right behaviour it's debatable.

From my perspective, this leads to inconsistent state, as:
- the address reported in the floating IP entity might differ from the one on 
the port backing the floating IP
- operational status is wrongly represented as down
- expectations concerning operations on the port are not met (eg: admin status 
update)
And I reckon state inconsistencies should always be avoided.

Considering the situation described above, there are few possible options.


1- don't do anything, since the port backing the floating IP is hidden from the 
tenant.
This might be ok provided that a compelling reason for ignoring entities not 
visible to tenants is provided.
However it has to be noted that Neutron authZ logic, which is based on 
openstack.common would allow deployers to change that (*)

2- remove the need for a floating IP to be backed from a port
While this might seem simple, this has non-trivial implications as IPAM logic 
would need to become aware of floating IPs, and should  be discussed further.

3- leverage policy-based APIs, and transform floating IPs in a "remote access 
policy"
In this way the floating IP will become a policy to apply to a port; it will be 
easier to solve conflicts with security policies and it will be possible to 
just use IPs (or addressing policies) configured on the port.
However, this will be hardly backward compatible, and its feasibility depends 
on the outcome of the more general discussions on policy-based APIs for neutron.

4- Document the current behaviour
This is something which is probably worth doing anyway until a solution is 
agreed upon

Summarising, since all the 'technical' options sounds not feasible for the 
upcoming Icehouse release, it seems worth at least documenting the current 
behaviour, and start a discussion on whether we should do something about this 
and, if yes, what.


Regards and apologies for the long post,
Salvatore

(*) As an interesting corollary, the flexibility of making authZ policies 
super-configurable causes the API to be non-portable. However, this is a 
subject for a different discussion.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Implement NAPT in neutron (https://blueprints.launchpad.net/neutron/+spec/neutron-napt-api)

2014-01-09 Thread shihanzhang


I think these two BPs achieve the same function; it is very necessary to
implement this function!
https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding
https://blueprints.launchpad.net/neutron/+spec/neutron-napt-api


At 2014-01-09 16:56:20, "Nir Yechiel" wrote:





From: "Dong Liu" 
To: "Nir Yechiel" 
Cc: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Wednesday, January 8, 2014 5:36:14 PM
Subject: Re: [neutron] Implement NAPT in neutron 
(https://blueprints.launchpad.net/neutron/+spec/neutron-napt-api)





On January 8, 2014, at 20:24, Nir Yechiel wrote:


Hi Dong,



Can you please clarify this blueprint? Currently in Neutron, if an instance has
a floating IP, then that will be used for both inbound and outbound traffic. If
an instance does not have a floating IP, it can make connections out using the
gateway IP (SNAT using PAT/NAT overload). Is the idea in this blueprint to
implement PAT in both directions using only the gateway IP? Also, did you see
this one [1]?



Thanks,

Nir



[1] https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding





I think my idea is a duplicate of this one.
https://blueprints.launchpad.net/neutron/+spec/access-vms-via-port-mapping



Sorry for missing this.


[Nir] Thanks, I wasn't familiar with this one. So is there a difference between 
those three?

https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding

https://blueprints.launchpad.net/neutron/+spec/access-vms-via-port-mapping

https://blueprints.launchpad.net/neutron/+spec/neutron-napt-api


Looks like all of them are trying to solve the same challenge using the public 
gateway IP and PAT.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev