Public bug reported:

-------------------------
System environment
-------------------------
OpenStack: Queens
Ubuntu: 18.04.3 LTS

Neutron: 12.1.0
Nova: 17.0.12

Neutron is configured with DVR and HA routers.

-------------------------
Problem description
-------------------------
We have several projects in OpenStack where the default security group created 
during the initial setup of the project now opens all ports on instances that 
have a floating IP assigned. Our expected behaviour is that all ports are 
closed to public access through the floating IP. For all other projects in the 
same OpenStack installation this is the case. We also cannot reproduce the 
error when we use a new security group with rules identical to those of the 
default security group. Creating new security groups fixes the issue as a 
workaround, but we are still not sure that there is no underlying issue that 
could resurface in the future.

Security Group Rules
+-----------------+--------------------------------------------------------------------------------------------+
| Field           | Value                                                                                      |
+-----------------+--------------------------------------------------------------------------------------------+
| created_at      | 2019-08-19T17:18:29Z                                                                       |
| description     | Default security group                                                                     |
| id              | 77015487-2991-4924-afd8-7b9468cacd3e                                                       |
| location        | Munch({'project': Munch({'domain_id': None, 'id': u'793d7760e0da415ca14013e5aaa0fb36',     |
|                 | 'name': None, 'domain_name': None}), 'cloud': '', 'region_name': 'RegionOne',              |
|                 | 'zone': None})                                                                             |
| name            | default                                                                                    |
| project_id      | 793d7760e0da415ca14013e5aaa0fb36                                                           |
| revision_number | 27                                                                                         |
| rules           | created_at='2019-11-22T07:57:32Z', direction='ingress', ethertype='IPv4',                  |
|                 | id='2a579604-50c6-4bd9-9ec7-07d835adca9d',                                                 |
|                 | remote_group_id='77015487-2991-4924-afd8-7b9468cacd3e', updated_at='2019-11-22T07:57:32Z'  |
|                 | created_at='2019-11-22T07:59:19Z', direction='egress', ethertype='IPv6',                   |
|                 | id='9cbb5404-4ecc-46ef-b582-39895a7cdf2a', updated_at='2019-11-22T07:59:19Z'               |
|                 | created_at='2019-11-22T07:57:51Z', direction='ingress', ethertype='IPv6',                  |
|                 | id='afd436a5-273a-479e-9829-a62da3df7e0a',                                                 |
|                 | remote_group_id='77015487-2991-4924-afd8-7b9468cacd3e', updated_at='2019-11-22T07:57:51Z'  |
|                 | created_at='2019-08-20T08:40:14Z', direction='ingress', ethertype='IPv4',                  |
|                 | id='de9dbe59-7c2a-4f20-8281-b62a41416538', protocol='icmp', remote_ip_prefix='0.0.0.0/0',  |
|                 | updated_at='2019-08-20T08:40:14Z'                                                          |
|                 | created_at='2019-08-19T17:18:29Z', direction='egress', ethertype='IPv4',                   |
|                 | id='fa3e699e-b8ce-4a79-bddb-db19233214e5', updated_at='2019-08-19T17:18:29Z'               |
| tags            | []                                                                                         |
| updated_at      | 2019-11-22T07:59:19Z                                                                       |
+-----------------+--------------------------------------------------------------------------------------------+

-------------------------
Investigation (so far)
-------------------------
We have checked the iptables entries on the compute node for a server instance 
showing the described behaviour. We could not identify any difference compared 
to instances in other projects that do not have the issue. We also checked the 
iptables rules in the qrouter namespace responsible for the floating IP.

We found the following relevant rules for the instance:
-A neutron-openvswi-INPUT -m physdev --physdev-in tap68c54c8c-33 --physdev-is-bridged -m comment --comment "Direct incoming traffic from VM to the security group chain." -j neutron-openvswi-o68c54c8c-3

-A neutron-openvswi-sg-chain -m physdev --physdev-in tap68c54c8c-33 --physdev-is-bridged -m comment --comment "Jump to the VM specific chain." -j neutron-openvswi-o68c54c8c-3

-A neutron-openvswi-o68c54c8c-3 -s 0.0.0.0/32 -d 255.255.255.255/32 -p udp -m udp --sport 68 --dport 67 -m comment --comment "Allow DHCP client traffic." -j RETURN
-A neutron-openvswi-o68c54c8c-3 -j neutron-openvswi-s68c54c8c-3

-A neutron-openvswi-s68c54c8c-3 -s 10.0.5.11/32 -m mac --mac-source FA:16:3E:41:5A:96 -m comment --comment "Allow traffic from defined IP/MAC pairs." -j RETURN
-A neutron-openvswi-s68c54c8c-3 -m comment --comment "Drop traffic without an IP/MAC allow rule." -j DROP

-A neutron-openvswi-o68c54c8c-3 -p udp -m udp --sport 68 --dport 67 -m comment --comment "Allow DHCP client traffic." -j RETURN
-A neutron-openvswi-o68c54c8c-3 -p udp -m udp --sport 67 --dport 68 -m comment --comment "Prevent DHCP Spoofing by VM." -j DROP
-A neutron-openvswi-o68c54c8c-3 -m state --state RELATED,ESTABLISHED -m comment --comment "Direct packets associated with a known session to the RETURN chain." -j RETURN
-A neutron-openvswi-o68c54c8c-3 -j RETURN
-A neutron-openvswi-o68c54c8c-3 -m state --state INVALID -m comment --comment "Drop packets that appear related to an existing connection (e.g. TCP ACK/FIN) but do not have an entry in conntrack." -j DROP
-A neutron-openvswi-o68c54c8c-3 -m comment --comment "Send unmatched traffic to the fallback chain." -j neutron-openvswi-sg-fallback

-A neutron-openvswi-sg-chain -j ACCEPT
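To make these chains easier to reason about, the following is a toy model of iptables first-match traversal (a simplified sketch, not real iptables). It illustrates two points visible in the rules above: traffic that RETURNs out of the per-port chain is accepted by the final "-j ACCEPT" in neutron-openvswi-sg-chain, and a security group allow-all rule presumably compiles to a bare "-j RETURN" with no match conditions (as the group's IPv4 egress rule appears to here). The chain contents below are a stripped-down stand-in, not the full rule set.

```python
# Toy model of iptables first-match traversal. A chain is a list of
# (predicate, target) pairs evaluated in order; RETURN hands control
# back to the calling chain, and a non-terminal target is a jump
# into a sub-chain.

def traverse(chains, name, packet):
    for predicate, target in chains[name]:
        if not predicate(packet):
            continue
        if target in ("ACCEPT", "DROP", "RETURN"):
            return target
        verdict = traverse(chains, target, packet)  # jump to sub-chain
        if verdict != "RETURN":
            return verdict  # ACCEPT/DROP are terminal and propagate
        # RETURN from the sub-chain: resume with the next rule here
    return "RETURN"  # falling off the end of a chain acts like RETURN

match_all = lambda pkt: True

# Stripped-down stand-in for the chains quoted above: the per-port
# chain ends with a jump to the fallback chain, and sg-chain accepts
# whatever RETURNs from the per-port chain.
chains = {
    "sg-chain": [
        (match_all, "o68c54c8c-3"),  # jump to the VM-specific chain
        (match_all, "ACCEPT"),       # the final "-j ACCEPT" in sg-chain
    ],
    "o68c54c8c-3": [
        (lambda p: p["established"], "RETURN"),  # RELATED,ESTABLISHED
        (match_all, "RETURN"),       # allow-all rule: a bare "-j RETURN"
        (match_all, "sg-fallback"),  # unmatched traffic -> fallback
    ],
    "sg-fallback": [(match_all, "DROP")],
}

# With the allow-all RETURN in place, even a brand-new connection
# never reaches the fallback DROP:
print(traverse(chains, "sg-chain", {"established": False}))  # ACCEPT

# Without it, the same packet falls through to the fallback DROP:
chains["o68c54c8c-3"].pop(1)
print(traverse(chains, "sg-chain", {"established": False}))  # DROP
```

This also shows why a single condition-free RETURN rule in a per-port chain is enough to let all traffic through that chain.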


Further, we tried adding a rule to the new security group referencing 
remote_group = default, i.e. direction=ingress, protocol=any, ethertype=IPv4 
and remote_group_id=<id_of_default>. We wanted to test what happens if we also 
allow access to the machine for all instances belonging to the default security 
group. As soon as we added this rule, we could again reach every port of the 
test instance via the floating IP. After deleting the rule referencing the 
default security group, the ports were no longer reachable.
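For reference, an ingress rule with a remote_group_id is expected to match only traffic whose source address belongs to a port in the referenced group, so it should never admit traffic arriving from the internet via a floating IP. The sketch below is a toy model of that expected semantics (the member addresses are made up for illustration), which is what makes the observed "every port reachable" behaviour surprising:

```python
# Toy sketch of expected remote-group semantics: an ingress rule with
# remote_group_id should only admit packets whose source address is a
# current member of the referenced group. Member addresses below are
# made up for illustration.

default_group_members = {"10.0.5.11", "10.0.5.12"}

def remote_group_rule_matches(src_ip, member_ips):
    """The rule matches iff the source address is a group member."""
    return src_ip in member_ips

# Traffic from another member of the default group is admitted:
print(remote_group_rule_matches("10.0.5.12", default_group_members))    # True

# Traffic from the internet (here a documentation address) arriving
# via the floating IP should not match this rule at all:
print(remote_group_rule_matches("203.0.113.7", default_group_members))  # False
```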

After we created new security groups and assigned them to the instances
in place of the default security group, we had the same issue with the
newly created security groups. Even after removing them and reassigning
one to only a single server, all ports of this server are open.

So we have projects that show this behaviour and projects that do not.
It is independent of the compute node and can be reproduced on different
nodes, even with new server instances or security groups.

-------------------------
Further investigation
-------------------------
We would like to investigate this issue further, to find out why this 
behaviour occurs and whether it is a bug or merely a misconfiguration. Can 
someone please suggest what additional information we should provide to nail 
down this issue?
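One small aid for comparing affected and unaffected compute nodes might be to scan an iptables-save dump for rules in the Neutron per-port chains that carry no match conditions at all, since such rules admit everything that reaches them. A bare "-j RETURN" can also be the legitimate compilation of an allow-all rule (such as the default egress rule), so hits need interpreting, but diffing the lists between a good and a bad node could show where the extra allow comes from. A sketch under those assumptions (the inline dump is a made-up sample; on a compute node one would feed the real output of iptables-save instead):

```python
# Sketch of a diagnostic helper: find rules in Neutron per-port chains
# ("neutron-openvswi-i..." / "neutron-openvswi-o...") that have no
# match conditions, i.e. lines of the exact form "-A <chain> -j RETURN".
# The sample dump below is made up for illustration.
import re

sample_dump = """\
-A neutron-openvswi-o68c54c8c-3 -m state --state RELATED,ESTABLISHED -j RETURN
-A neutron-openvswi-o68c54c8c-3 -j RETURN
-A neutron-openvswi-o68c54c8c-3 -j neutron-openvswi-sg-fallback
"""

MATCHLESS = re.compile(r"^-A (neutron-openvswi-[io]\S+) -j RETURN$")

def matchless_return_rules(dump):
    """Return the chain names containing a condition-free RETURN rule."""
    return [m.group(1) for line in dump.splitlines()
            if (m := MATCHLESS.match(line))]

print(matchless_return_rules(sample_dump))  # ['neutron-openvswi-o68c54c8c-3']
```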

** Affects: neutron
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1861496

Title:
  All ports of a server instance are open even though no security group
  allows this

Status in neutron:
  New


