This is how I did it. Then restart each OSD one by one, but monitor
with ceph -s; when the cluster is healthy again, proceed with the next
OSD restart... Also make sure the networks are fine on the physical
nodes, and that you can ping between them...
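A minimal sketch of that rolling restart, assuming systemd-managed OSDs (unit names like ceph-osd@<id>) and that "ceph -s" is on the PATH; the OSD ids and the 10-second poll interval are placeholders to adapt:

```shell
#!/bin/sh
# Rolling OSD restart: restart one OSD, wait until the cluster reports
# HEALTH_OK, then move on to the next one.
# Assumes systemd units named ceph-osd@<id>; adjust for your deployment.

# Succeeds when the "ceph -s" text on stdin contains HEALTH_OK.
is_healthy() {
    grep -q 'HEALTH_OK'
}

# Restart each OSD id given as an argument, pausing between them until
# the cluster settles.
restart_osds() {
    for id in "$@"; do
        systemctl restart "ceph-osd@${id}"
        until ceph -s | is_healthy; do
            sleep 10
        done
    done
}

# Example usage: restart_osds 0 1 2
```

This is just a sketch of the "one by one, watch ceph -s" procedure described above, not an official tool; on older non-systemd deployments the restart command differs (e.g. service ceph restart osd.N).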

[global]
x
x
x
x
x
x

#############################################
### REPLICATION NETWORK ON SEPARATE 10G NICs

# replication network
cluster network = 10.44.251.0/24

# public/client network
public network = 10.44.253.0/16

#############################################

[mon.xx]
mon_addr = x.x.x.x:6789
host = xx

[mon.yy]
mon_addr = x.x.x.x:6789
host = yy

[mon.zz]
mon_addr = x.x.x.x:6789
host = zz
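As explained further down the thread, with a replica count of 3 each client write on the public network is amplified into additional replica writes on the cluster network. A quick back-of-the-envelope sketch of that arithmetic (pure illustration, not a Ceph command):

```shell
#!/bin/sh
# Replication traffic amplification: a client writes N GB over the
# public network to the primary OSD; the primary then sends
# (replica_count - 1) copies over the cluster network.

cluster_traffic_gb() {
    data_gb=$1
    replica_count=$2
    echo $(( data_gb * (replica_count - 1) ))
}

# Example: 1 GB written to a pool with replica count 3
cluster_traffic_gb 1 3    # prints 2 (GB of replication traffic)
```

This is why putting the cluster network on separate (here 10G) NICs helps: the public link carries only the client's 1 GB, while the 2 GB of replica traffic stays on the dedicated replication NICs.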

On 14 March 2015 at 19:14, Georgios Dimitrakakis <[email protected]>
wrote:

> I thought that it was easy but apparently it's not!
>
> I have the following in my conf file
>
>
> mon_host = 192.168.1.100,192.168.1.101,192.168.1.102
> public_network = 192.168.1.0/24
> mon_initial_members = fu,rai,jin
>
>
> but still the 15.12.6.21 link is being saturated....
>
> Any ideas why???
>
> Should I put cluster network as well??
>
> Should I put each OSD in the CONF file???
>
>
> Regards,
>
>
> George
>
>
>
>
>
>  Andrija,
>>
>> thanks a lot for the useful info!
>>
>> I would also like to thank "Kingrat" at the IRC channel for his
>> useful advice!
>>
>>
>> I was under the wrong impression that public is the one used for RADOS.
>>
>> So I thought that public=external=internet and therefore I used that
>> one in my conf.
>>
>> I understand now that I should have specified as Ceph's public
>> network what I call the "internal" one, over which all the machines
>> talk directly to each other.
>>
>>
>> Thank you all for the feedback!
>>
>>
>> Regards,
>>
>>
>> George
>>
>>
>>  Public network is client-to-OSD traffic - and if you have NOT
>>> explicitly defined a cluster network, then OSD-to-OSD replication
>>> also takes place over that same network.
>>>
>>> Otherwise, you can define separate public and cluster (private)
>>> networks - so OSD replication happens over dedicated NICs (the
>>> cluster network) and thus speeds up.
>>>
>>> If, for example, the replica count on a pool is 3, then each 1 GB of
>>> data written to a particular OSD generates 2 x 1 GB of additional
>>> writes to the replicas... - which ideally take place over separate
>>> NICs to speed things up...
>>>
>>> On 14 March 2015 at 17:43, Georgios Dimitrakakis  wrote:
>>>
>>>  Hi all!!
>>>>
>>>> What is the meaning of public_network in ceph.conf?
>>>>
>>>> Is it the network that OSDs are talking and transferring data?
>>>>
>>>> I have two nodes with two IP addresses each: one for the internal
>>>> network 192.168.1.0/24 and one external, 15.12.6.*
>>>>
>>>> I see the following in my logs:
>>>>
>>>> osd.0 is down since epoch 2204, last address 15.12.6.21:6826/33094
>>>> osd.1 is down since epoch 2206, last address 15.12.6.21:6817/32463
>>>> osd.2 is down since epoch 2198, last address 15.12.6.21:6843/34921
>>>> osd.3 is down since epoch 2200, last address 15.12.6.21:6838/34208
>>>> osd.4 is down since epoch 2202, last address 15.12.6.21:6831/33610
>>>> osd.5 is down since epoch 2194, last address 15.12.6.21:6858/35948
>>>> osd.7 is down since epoch 2192, last address 15.12.6.21:6871/36720
>>>> osd.8 is down since epoch 2196, last address 15.12.6.21:6855/35354
>>>>
>>>> I've managed to add a second node, and during rebalancing I see
>>>> that data is transferred through the internal 192.* network, but
>>>> the external link is also saturated!
>>>>
>>>> What is being transferred over that link?
>>>>
>>>> Any help much appreciated!
>>>>
>>>> Regards,
>>>>
>>>> George
>>>> _______________________________________________
>>>> ceph-users mailing list
>>>> [email protected]
>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>>
>>>
>>> --
>>>
>>> Andrija Panić
>>>
>>>
>>
>>
>
>



-- 

Andrija Panić
