On Wed, 08 Oct 2014 00:45:06, Scott Laird wrote:
IIRC, one thing to look out for is that there are two ways to do IP over
Infiniband. You can either do IP over Infiniband directly (IPoIB), or
encapsulate Ethernet in Infiniband (EoIB), and then do IP over the fake
Ethernet network.
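For the IPoIB route, the host side is just an ordinary Linux network interface once the ib_ipoib module is loaded. A minimal sketch (the interface name, address, and mode choice here are assumptions for illustration; connected vs. datagram mode affects the usable MTU):

```sh
# Load the IPoIB driver; it exposes each IB port as ibN
modprobe ib_ipoib

# Connected mode allows a much larger MTU than datagram mode (hardware permitting)
echo connected > /sys/class/net/ib0/mode

# Address and MTU are placeholder values
ip addr add 10.10.10.2/24 dev ib0
ip link set ib0 mtu 65520 up
```

With EoIB you would instead see a fake Ethernet device and configure it like any other NIC.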
If you want, you could build it up with Vyatta.
And this gives you the possibility of having a fully featured OS.
What kind of hardware would you use to build up a switch?
On Wed, Oct 8, 2014 at 8:15 AM, Massimiliano Cuttini m...@phoenixweb.it
wrote:
What kind of hardware would you use to build up a switch?
Hard to beat the Quanta T3048-LY2: 48 10 gig ports, 4 40 gig ports.
Hi Christian,
When you say 10 gig infiniband, do you mean QDRx4 Infiniband (usually
flogged as 40Gb/s even though it is 32Gb/s, but who's counting), which
tends to be the same basic hardware as the 10Gb/s Ethernet offerings from
Mellanox?
A brand new 18 port switch of that caliber will only
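The 40 vs. 32 Gb/s gap is just the 8b/10b line coding on QDR: four lanes at 10 Gb/s signaling each, of which 8 bits in every 10 are payload. A quick sanity check of that arithmetic:

```shell
lanes=4
per_lane_signal=10                      # Gb/s signaling per QDR lane
signal=$((lanes * per_lane_signal))     # the "40 Gb/s" marketing figure
data=$((signal * 8 / 10))               # usable rate after 8b/10b encoding
echo "${signal} Gb/s signaling -> ${data} Gb/s data"
```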
I've done this two ways in the past. Either I'll give each machine an
Infiniband network link and a 1000baseT link and use the Infiniband one as
the private network for Ceph, or I'll throw an Infiniband card into a PC
and run something like Vyatta/VyOS on it and make it a router, so IP
traffic
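In Ceph terms, the "Infiniband as the private network" setup described above maps onto the public/cluster network split in ceph.conf. The subnets below are made-up placeholders:

```ini
[global]
# clients and MONs talk over the 1000baseT network
public network = 192.168.0.0/24
# OSD replication and recovery traffic goes over the IPoIB network
cluster network = 10.10.10.0/24
```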
IPoIB is more common, but I'd assume that IB-Ethernet bridges
Christian Balzer ch...@gol.com wrote:
Any decent switch with LACP will do really.
And with that I mean Cisco, Brocade etc.
But that won't give you redundancy if a switch fails, see below.
TRILL ( http://en.wikipedia.org/wiki/TRILL_(computing) ) based switches
(we have some Brocade VDX
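On the host side, "any decent switch with LACP" pairs with a Linux 802.3ad bond. A minimal iproute2 sketch, with the interface names, address, and hash policy as assumptions:

```sh
# Create an LACP (802.3ad) bond; layer3+4 hashing spreads flows across links
ip link add bond0 type bond mode 802.3ad miimon 100 xmit_hash_policy layer3+4
ip link set eth0 down; ip link set eth0 master bond0
ip link set eth1 down; ip link set eth1 master bond0
ip addr add 192.168.0.10/24 dev bond0
ip link set bond0 up
```

Note that a single TCP flow still only uses one member link; LACP helps aggregate throughput across many flows.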
On Sun, Oct 5, 2014 at 11:19 PM, Ariel Silooy ar...@bisnis2030.com wrote:
Hello fellow Ceph users, right now we are researching Ceph for our storage.
We have a cluster of 3 OSD nodes (and 5 MONs) for our RBD disks, for which we
are currently using an NFS proxy setup. On each OSD node we have 4x 1G
On Mon, 6 Oct 2014 09:17:03, Carl-Johan Schenström wrote:
First, thank you for your reply
TRILL ( http://en.wikipedia.org/wiki/TRILL_(computing) ) based switches
(we have some Brocade VDX ones) have the advantage that they can do LACP
over 2 switches.
Meaning you can get full speed while both switches are running, and still get
redundancy (at half speed) if one fails.
Thank you for your reply,
We use a stacked pair of Dell PowerConnect 6248s with the 2*12 Gb/s
interconnect, and single 10 GbE links to the four OSD nodes, with 1 GbE
failovers using Linux bonding in active/backup mode.
I'm sorry, but I just have to ask, what kind of 10GbE NIC do you use? If
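The active/backup failover described above would look something like this with iproute2 (interface names are hypothetical; the 10 GbE port is marked primary so the 1 GbE link only carries traffic if it fails):

```sh
ip link add bond0 type bond mode active-backup miimon 100
ip link set eth10g down; ip link set eth10g master bond0
ip link set eth1g down; ip link set eth1g master bond0
# Prefer the 10 GbE link; fail over to 1 GbE only on link loss
ip link set bond0 type bond primary eth10g
ip link set bond0 up
```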
Thank you for your reply,
I really would think about something faster than gig ethernet. Merchant
silicon is changing the world; take a look at guys like Quanta. I just
bought two T3048-LY2 switches with Cumulus software for under 6k each.
That gives you 48 10 gig ports and 4 40 gig ports to