Re: [ceph-users] ceph-ansible firewalld blocking ceph comms

2019-07-25 Thread Nathan Harper
This is a new issue for us; we did not have the same problem running the same 
activity on our test system. 

Regards,
Nathan

> On 25 Jul 2019, at 22:00, solarflow99  wrote:
> 
> I used ceph-ansible just fine, never had this problem.  
> 
>> On Thu, Jul 25, 2019 at 1:31 PM Nathan Harper  
>> wrote:
>> Hi all,
>> 
>> We've run into a strange issue with one of our clusters managed with 
>> ceph-ansible. We're adding some RGW nodes to the cluster, so we re-ran 
>> site.yml against it. The new RGWs were added successfully, but when the 
>> playbook ran, we started to get slow requests effectively across the whole 
>> cluster. We quickly realised that the firewall was now (apparently) 
>> blocking Ceph communications. I say apparently because the config looks 
>> correct:
>> 
>>> [root@osdsrv05 ~]# firewall-cmd --list-all
>>> public (active)
>>>   target: default
>>>   icmp-block-inversion: no
>>>   interfaces:
>>>   sources: 172.20.22.0/24 172.20.23.0/24
>>>   services: ssh dhcpv6-client ceph
>>>   ports:
>>>   protocols:
>>>   masquerade: no
>>>   forward-ports:
>>>   source-ports:
>>>   icmp-blocks:
>>>   rich rules:
>> 
>> If we drop the firewall, everything goes back to healthy. All the clients 
>> (OpenStack Cinder) are on the 172.20.22.0 network (172.20.23.0 is the 
>> replication network). Has anyone seen this?
>> -- 
>> Nathan Harper // IT Systems Lead
>> 
>> 
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
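One thing worth checking with a zone configured like the dump above: firewalld applies a source-bound zone only to packets whose source address matches one of the `sources` CIDRs; traffic from any other address (a monitor or client on a different subnet, for instance) falls through to the default zone, which may drop it. A minimal sketch of that coverage check, using the CIDRs from the zone dump (the helper name is ours, not a firewalld API):

```python
import ipaddress

# Source CIDRs taken from the firewall-cmd --list-all output above
ZONE_SOURCES = ["172.20.22.0/24", "172.20.23.0/24"]

def covered_by_zone(peer_ip, sources=ZONE_SOURCES):
    """Return True if peer_ip falls inside any of the zone's source CIDRs.

    firewalld matches a source-bound zone per-packet by source address;
    a peer outside every CIDR is handled by the default zone instead,
    even though the 'public' zone looks correctly configured.
    """
    addr = ipaddress.ip_address(peer_ip)
    return any(addr in ipaddress.ip_network(net) for net in sources)

# A client on the public network matches the zone...
print(covered_by_zone("172.20.22.15"))   # True
# ...but a peer outside both CIDRs would hit the default zone instead.
print(covered_by_zone("10.0.0.5"))       # False
```

If every mon, OSD, and client address passes this check, the problem is more likely the zone's interface binding (note `interfaces:` is empty in the dump) than the source list.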
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
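Not from the thread, but a quick way to distinguish a firewall drop from a service problem is a plain TCP probe against the Ceph ports (by default 6789 for monitors and 6800-7300 for OSDs): a dropping firewall typically produces a timeout, while a reachable host with nothing listening refuses the connection. A small sketch, with the hostname as a placeholder:

```python
import socket

def port_reachable(host, port, timeout=2.0):
    """Classify a TCP connect attempt: 'open', 'closed', or 'filtered'.

    A firewall that silently drops packets usually shows up as a
    timeout ('filtered'); an active refusal ('closed') means the host
    was reachable but nothing was listening on that port.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except socket.timeout:
        return "filtered"      # likely dropped by a firewall
    except ConnectionRefusedError:
        return "closed"        # host reachable, no listener

# Example usage from a client host (hostname is a placeholder):
#   for port in (6789, 6800):
#       print(port, port_reachable("osdsrv05", port))
```

Running this from a Cinder host with the firewall up versus down would show directly whether firewalld is the component eating the traffic.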



