Ok guys,
I have successfully deployed a 3-node Ceph cluster.
On these servers I have:
Node1
OSD.0 = 4 TB
OSD.1 = 4 TB
OSD.2 = 4 TB
OSD.3 = 4 TB
Node2
OSD.4 = 3 TB
OSD.5 = 3 TB
OSD.6 = 3 TB
OSD.7 = 3 TB
Node3
OSD.8 = 3 TB
OSD.9 = 3 TB
OSD.10 = 2 TB
In the Ceph cluster, the total amount of data storage
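Just as a rough back-of-the-envelope, only summing the OSDs listed above:

# Raw capacity per node: Node1 = 4 x 4 TB = 16 TB, Node2 = 4 x 3 TB = 12 TB,
# Node3 = 3 + 3 + 2 TB = 8 TB
echo $(( 16 + 12 + 8 ))   # 36 TB raw in total
# Assuming the usual size=3 replication with host as the failure domain, every
# host stores one copy of each object, so usable space is capped by the
# smallest host (about 8 TB) rather than by 36 / 3 = 12 TB.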
That's it, as I understand it, Josh: you basically need to turn your switch
into X separate switches, so each node's NIC is running on a "separate"
network.
If you were to do the same thing physically, without any config, with 3
nodes, you would need as many separate switches as you wanted
sorry, should say "mausezahn". It's a part of Netsniff
http://netsniff-ng.org/
On Fri, Aug 24, 2018, 5:15 PM Josh Knight wrote:
> Just guessing here, if the switch doesn't support rr on its port channels,
> then using separate VLANs instead of bundles on the switch is essentially
> wiring
Just guessing here, if the switch doesn't support rr on its port channels,
then using separate VLANs instead of bundles on the switch is essentially
wiring nodeA to nodeB. That way you don't hit the port channel hashing on
the switch and you keep the rr as-is from A to B.
I would also try using
I can get 3 Gbps. At least 1.3 Gbps.
Don't know why!
On 24/08/2018 17:36, "mj" wrote:
> Hi Mark,
>
> On 08/24/2018 06:20 PM, Mark Adams wrote:
>
>> also, balance-rr through a switch requires each nic to be on a separate
>> vlan. You probably need to remove your lacp config also but this
Hi Mark,
On 08/24/2018 06:20 PM, Mark Adams wrote:
also, balance-rr through a switch requires each nic to be on a separate
vlan. You probably need to remove your lacp config also, but this depends on
switch model and configuration, so the safest idea is to remove it.
Then I believe your iperf test
On 24.08.2018 12:01, Gilberto Nunes wrote:
So what bond mode am I supposed to use in order to get more speed? I mean, how
do I join the NICs to get 4 Gb? I will use Ceph!
I know I should use 10 Gb but I don't have it right now.
Thanks
On 24/08/2018 03:01, "Dietmar Maurer" wrote:
This 802.3ad do no
also, balance-rr through a switch requires each nic to be on a separate
vlan. You probably need to remove your lacp config also, but this depends on
switch model and configuration, so the safest idea is to remove it.
so I think you have 3 nodes
for example:
node1:
ens0 on port 1 vlan 10
ens1 on
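A sketch of how that layout could continue, assuming 4 NICs per node; interface
names, VLAN numbers and the address below are only illustrative:

# Switch side: no LAG/port-channel; each "lane" gets its own access VLAN, e.g.
#   VLAN 10: ens0 of node1, node2, node3
#   VLAN 20: ens1 of node1, node2, node3
#   VLAN 30: ens2 of node1, node2, node3
#   VLAN 40: ens3 of node1, node2, node3
#
# Node side, /etc/network/interfaces, balance-rr over the four NICs:
auto bond0
iface bond0 inet static
        address 10.10.10.110
        netmask 255.255.255.0
        bond-slaves ens0 ens1 ens2 ens3
        bond-mode balance-rr
        bond-miimon 100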
I don't know your topology, I'm assuming you're going from nodeA ->
switch -> nodeB ? Make sure that entire path is using RR. You could
verify this with interface counters on the various hops. If a single hop
is not doing it correctly, it will limit the throughput.
On Fri, Aug 24, 2018 at
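One way to do that from the node side (generic commands, interface names are
just examples):

cat /proc/net/bonding/bond0   # confirms which mode is active and which slaves are in the bond
ip -s link show ens0          # per-NIC TX/RX byte counters; repeat for ens1, ens2, ens3
# If round-robin is really in effect end to end, TX bytes should grow at roughly
# the same rate on every slave while iperf runs; if only one slave's counters
# move, some hop in the path is still hashing everything onto a single link.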
So I tried balance-rr with a LAG on the switch and still get 1 Gb:
pve-ceph02:~# iperf3 -c 10.10.10.100
Connecting to host 10.10.10.100, port 5201
[ 4] local 10.10.10.110 port 52674 connected to 10.10.10.100 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00
Depending on your topology/configuration, you could try to use bond-rr mode
in Linux instead of 802.3ad.
Bond-rr mode is the only mode that will put pkts for the same mac/ip/port
tuple across multiple interfaces. This will work well for UDP but TCP may
suffer performance issues because pkts can
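If you do go the balance-rr route, one knob that is often suggested for the
out-of-order problem (not mentioned in this thread, and it only softens it) is
the kernel's TCP reordering tolerance:

sysctl net.ipv4.tcp_reordering          # default is 3 segments
sysctl -w net.ipv4.tcp_reordering=127   # tolerate much more reordering before assuming loss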
Hi,
On Fri, 24 Aug 2018 12:15:06 +0200
Thomas Lamprecht wrote:
> Hi,
>
> On 8/24/18 11:51 AM, Dreyer, Jan, SCM-IT wrote:
> > Hi,
> >
> > my configuration:
> > HP DL380 G5 with Smart Array P400
> > Proxmox VE 5.2-1
> > name: 4.4.128-1-pve #1 SMP PVE 4.4.128-111 (Wed, 23 May 2018
> > 14:00:02
Hi,
Yes, it is our understanding that if the hardware (switch) supports it,
"bond-xmit-hash-policy layer3+4" gives you the best spread.
But it will still give you 4 'lanes' of 1 Gb. Ceph will connect using
different ports, IPs etc, and each connection should use a different
lane, so altogether,
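For reference, the relevant stanza in /etc/network/interfaces would look
roughly like this (interface names and address are placeholders, and the switch
needs a matching LACP LAG):

auto bond0
iface bond0 inet static
        address 10.10.10.110
        netmask 255.255.255.0
        bond-slaves ens0 ens1 ens2 ens3
        bond-mode 802.3ad
        bond-miimon 100
        bond-xmit-hash-policy layer3+4
# layer3+4 only picks which single link a given connection uses; one TCP stream
# is still capped at 1 Gb, but many parallel streams (like Ceph's OSD
# connections) can spread across all four links.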
If using standard 802.3ad (LACP) you will always get only the performance of a
single link between one host and another.
Using "bond-xmit-hash-policy layer3+4" might get you better performance, but
it is not standard LACP.
On 24.08.18 at 12:01, Gilberto Nunes wrote:
> So what bond mode I
Hi,
On 8/24/18 11:51 AM, Dreyer, Jan, SCM-IT wrote:
> Hi,
>
> my configuration:
> HP DL380 G5 with Smart Array P400
> Proxmox VE 5.2-1
> name: 4.4.128-1-pve #1 SMP PVE 4.4.128-111 (Wed, 23 May 2018 14:00:02 +)
> x86_64 GNU/Linux
> This system is currently running ZFS filesystem version 5.
>
So what bond mode am I supposed to use in order to get more speed? I mean, how
do I join the NICs to get 4 Gb? I will use Ceph!
I know I should use 10 Gb but I don't have it right now.
Thanks
On 24/08/2018 03:01, "Dietmar Maurer" wrote:
> > Isn't this 802.3ad supposed to aggregate the speed of all
Hi,
my configuration:
HP DL380 G5 with Smart Array P400
Proxmox VE 5.2-1
name: 4.4.128-1-pve #1 SMP PVE 4.4.128-111 (Wed, 23 May 2018 14:00:02 +)
x86_64 GNU/Linux
This system is currently running ZFS filesystem version 5.
My problem: When trying to update to a higher kernel (I tried 4.10
> Isn't this 802.3ad supposed to aggregate the speed of all available NICs??
No, not really. One connection is limited to 1 Gb. If you start more
parallel connections you can gain more speed.
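A quick way to see that, using the same iperf3 target as in the earlier test
(-P sets the number of parallel streams):

iperf3 -c 10.10.10.100 -P 4
# With layer3+4 hashing, each of the 4 TCP streams can land on a different
# slave, so the [SUM] line can approach 4 Gb/s instead of ~1 Gb/s. Whether they
# actually spread depends on the port hash, so more streams (e.g. -P 8) may be
# needed to fill all links.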