Re: [ceph-users] Question regarding client-network

2019-01-31 Thread Buchberger, Carsten
Thank you - we were expecting that, but wanted to be sure.
By the way, we are running our clusters on IPv6 with BGP to achieve massive
scalability and load balancing ;-)
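
For anyone curious, the Ceph side of that is fairly unspectacular - roughly the
following in ceph.conf (the prefixes below are documentation placeholders, not our
real addressing):

    [global]
        ms_bind_ipv6    = true                 # let MONs/OSDs bind to IPv6 addresses
        public network  = 2001:db8:100::/64    # client-facing prefix (placeholder)
        cluster network = 2001:db8:200::/64    # replication/heartbeat prefix (placeholder)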

Kind regards
Carsten Buchberger




[ceph-users] Question regarding client-network

2019-01-29 Thread Buchberger, Carsten
Hi,

it might be a dumb question - our Ceph cluster runs with dedicated client and
cluster networks.

I understand it like this: the client network is the network interface on which
client connections arrive (from the MON and OSD perspective), regardless of the
source IP address.
So as long as there is IP connectivity between the client and the client-network
IP addresses of our Ceph cluster, everything is fine?
Or is the client network on the Ceph side some kind of ACL that denies access if
the client does not originate from the defined network? The latter would be bad ;-)
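
For reference, what I mean by "client network" is the usual pair of options in
ceph.conf; the subnets here are placeholders:

    [global]
        public network  = 192.0.2.0/24      # addresses the MONs/OSDs bind to and advertise to clients
        cluster network = 198.51.100.0/24   # OSD-to-OSD replication and heartbeat traffic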

Best regards
Carsten Buchberger




[ceph-users] Migrating from pre-luminous multi-root crush hierarchy

2018-08-23 Thread Buchberger, Carsten
Hello,

when we started with Ceph we wanted to mix different disk types per host. Since
that was before device classes were available, we followed the advice to create a
multi-root hierarchy with disk-type-specific hosts.

So currently the OSD tree looks roughly like this:

 ID  CLASS     WEIGHT    TYPE NAME                             STATUS REWEIGHT PRI-AFF
 -8            218.21320 root capacity-root
 -7             36.36887     host ceph-dc-1-01-osd-01-sata-ssd
  2  capacity    3.63689         osd.2                             up      1.0     1.0
  9  capacity    3.63689         osd.9                             up      1.0     1.0
 10  capacity    3.63689         osd.10                            up      1.0     1.0
 15  capacity    3.63689         osd.15                            up      1.0     1.0
 20  capacity    3.63689         osd.20                            up      1.0     1.0
 24  capacity    3.63689         osd.24                            up      1.0     1.0
 30  capacity    3.63689         osd.30                            up      1.0     1.0
 33  capacity    3.63689         osd.33                            up      1.0     1.0
 41  capacity    3.63689         osd.41                            up      1.0     1.0
 46  capacity    3.63689         osd.46                            up      1.0     1.0
 -9             36.36887     host ceph-dc-1-01-osd-02-sata-ssd
  0  capacity    3.63689         osd.0                             up      1.0     1.0
  1  capacity    3.63689         osd.1                             up      1.0     1.0
  5  capacity    3.63689         osd.5                             up      1.0     1.0
  7  capacity    3.63689         osd.7                             up      1.0     1.0
 12  capacity    3.63689         osd.12                            up      1.0     1.0
 13  capacity    3.63689         osd.13                            up      1.0     1.0
 17  capacity    3.63689         osd.17                            up      1.0     1.0
 18  capacity    3.63689         osd.18                            up      1.0     1.0
 21  capacity    3.63689         osd.21                            up      1.0     1.0
 23  capacity    3.63689         osd.23                            up      1.0     1.0
 ..
-73             10.46027 root ssd-root
-46              3.48676     host ceph-dc-1-01-osd-01-ssd
 38  ssd          0.87169         osd.38                            up      1.0     1.0
 42  ssd          0.87169         osd.42                            up      1.0     1.0
 47  ssd          0.87169         osd.47                            up      1.0     1.0
 61  ssd          0.87169         osd.61                            up      1.0     1.0
-52              3.48676     host ceph-dc-1-01-osd-02-ssd
 40  ssd          0.87169         osd.40                            up      1.0     1.0
 43  ssd          0.87169         osd.43                            up      1.0     1.0
 45  ssd          0.87169         osd.45                            up      1.0     1.0
 49  ssd          0.87169         osd.49                            up      1.0     1.0

We recently upgraded to Luminous (you can see the device classes in the output).
So it should be possible to have one single root, no fake hosts, and just use the
device class.
We added some hosts/OSDs recently which back new pools, so we also created a new
hierarchy and CRUSH rules for those. That worked perfectly, and of course we want
to have the same for the old parts of the cluster, too.
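
For the new pools that was essentially just the device-class rules, something
along these lines (rule, root and pool names are examples, not our real ones):

    # list the device classes Luminous assigned
    ceph osd crush class ls

    # replicated rule that picks only OSDs of one class below a single root
    ceph osd crush rule create-replicated replicated-ssd <new-root> host ssd
    ceph osd pool set <new-pool> crush_rule replicated-ssd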

Is it possible to move the existing OSDs to a new root/bucket without having to
move all the data around (which might be difficult because we don't have enough
capacity to move 50 % of the OSDs)?

I imagine something like the following (a rough command sketch follows the list):


1. Magic maintenance command

2. Move osds to new bucket in hierarchy

3. Update the existing CRUSH rule, or create a new rule and update the pool

4. Magic maintenance-done command
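
To make that concrete, here is the kind of command sequence I have in mind -
purely a sketch with placeholder names, and with no claim about how much data
movement it would actually trigger:

    # 1. pause data movement while the CRUSH map is being edited
    ceph osd set norebalance
    ceph osd set nobackfill

    # 2. re-home the existing items under a single root: either move the
    #    per-disk-type host buckets as a whole ...
    ceph osd crush move ceph-dc-1-01-osd-01-sata-ssd root=<new-root>
    #    ... or place each OSD under its real host (weight has to be repeated)
    ceph osd crush set osd.2 3.63689 root=<new-root> host=<real-host>

    # 3. create a device-class rule for the old pools and switch them over
    ceph osd crush rule create-replicated replicated-capacity <new-root> host capacity
    ceph osd pool set <old-pool> crush_rule replicated-capacity

    # 4. let backfill/rebalance resume
    ceph osd unset nobackfill
    ceph osd unset norebalance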

We also plan to migrate the OSDs to BlueStore. Should we do this
a) before moving
b) after moving
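
For completeness, the per-OSD conversion I have in mind is a destroy-and-recreate
sequence, roughly as below (OSD id and device are placeholders):

    ID=2            # placeholder OSD id
    DEV=/dev/sdX    # placeholder data device

    ceph osd out $ID
    # wait until the cluster can tolerate losing this OSD
    while ! ceph osd safe-to-destroy osd.$ID; do sleep 60; done
    systemctl stop ceph-osd@$ID
    ceph osd destroy $ID --yes-i-really-mean-it
    ceph-volume lvm zap $DEV
    ceph-volume lvm create --bluestore --data $DEV --osd-id $ID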

I hope our issue is clear.

Best regards
Carsten



