On 22 January 2013 10:57, John Nielsen li...@jnielsen.net wrote:
On Jan 19, 2013, at 7:56 PM, Joseph Glanville
joseph.glanvi...@orionvm.com.au wrote:
I assume it is now an EoIB driver. Does it replace the IPoIB driver?
Nope, it is an upper-layer thing: https://lwn.net/Articles/509448/
Aye,
On Jan 17, 2013, at 11:19 AM, Gandalf Corvotempesta
gandalf.corvotempe...@gmail.com wrote:
2013/1/17 Atchley, Scott atchle...@ornl.gov:
10GbE should get close to 1.2 GB/s compared to 1 GB/s for IB SDR. Latency
again depends on the Ethernet driver.
10GbE faster than IB SDR? Really?
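(Rough arithmetic behind those figures, for anyone puzzled by a "10 Gb/s" Ethernet link beating "10 Gb/s" IB SDR: InfiniBand quotes the signalling rate and uses 8b/10b encoding, so only 80% of it carries data, while 10GbE quotes the data rate. The Python below is just that back-of-the-envelope calculation, not a benchmark.)

# Back-of-the-envelope payload rates; protocol (TCP/IP, frame) overhead
# still comes off the top of these numbers.
GB = 1e9
links = {
    "10GbE":     10e9,          # quoted as data rate (64b/66b is on top)
    "IB SDR 4x": 10e9 * 8 / 10, # 10 Gb/s signalling, 8b/10b -> 8 Gb/s data
    "IB DDR 4x": 20e9 * 8 / 10, # -> 16 Gb/s data
    "IB QDR 4x": 40e9 * 8 / 10, # -> 32 Gb/s data
}
for name, bits_per_s in links.items():
    print(f"{name:10s} ~{bits_per_s / 8 / GB:.2f} GB/s of payload, best case")

That puts 10GbE at ~1.25 GB/s before overhead (hence "close to 1.2 GB/s" in practice), SDR at 1 GB/s, DDR at 2 GB/s and QDR at 4 GB/s, which matches the figures quoted in this thread.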
On Jan 22, 2013, at 4:06 PM, Atchley, Scott atchle...@ornl.gov wrote:
On Jan 17, 2013, at 11:19 AM, Gandalf Corvotempesta
gandalf.corvotempe...@gmail.com wrote:
2013/1/17 Atchley, Scott atchle...@ornl.gov:
10GbE should get close to 1.2 GB/s compared to 1 GB/s for IB SDR. Latency
again
On Jan 19, 2013, at 7:56 PM, Joseph Glanville joseph.glanvi...@orionvm.com.au
wrote:
I assume it is now an EoIB driver. Does it replace the IPoIB driver?
Nope, it is an upper-layer thing: https://lwn.net/Articles/509448/
Aye, it's effectively a NAT translation layer that strips Ethernet
On 17 January 2013 20:46, Gandalf Corvotempesta
gandalf.corvotempe...@gmail.com wrote:
2013/1/16 Mark Nelson mark.nel...@inktank.com:
I don't know if I have to use a single two port IB card (switch
redundancy and no card redundancy) or
I have to use two single port cards. (or a single one
if sticking journals, flashcache, and xfs
journals for 6 osds on 1 drive!). There's probably a reasonable
endurance per cost argument for a severely under-subscribed 520 (or
other similar drive) as well. It'd be an interesting study to look at
how long it takes small enterprise drives to die vs
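(The endurance-per-cost question boils down to a one-line estimate: rated write endurance divided by the flash traffic you actually generate. The Python below sketches that calculation; every number in it is a made-up placeholder, not a measurement of the 520 or any other drive.)

# Hypothetical endurance estimate; all figures are placeholders.
rated_endurance_tb = 36.0        # whatever the vendor rates the drive for
journal_gb_per_day = 500.0       # sustained journal writes hitting the SSD
write_amplification = 2.0        # extra flash writes behind the controller
flash_tb_per_day = journal_gb_per_day / 1000 * write_amplification
print(f"estimated lifetime: {rated_endurance_tb / flash_tb_per_day:.0f} days")

Severe under-subscription helps both terms: less daily traffic per drive, and the extra spare area tends to lower the write amplification factor.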
On 01/17/2013 07:32 AM, Joseph Glanville wrote:
On 17 January 2013 20:46, Gandalf Corvotempesta
gandalf.corvotempe...@gmail.com wrote:
2013/1/16 Mark Nelson mark.nel...@inktank.com:
I don't know if I have to use a single two port IB card (switch
redundancy and no card redundancy) or
I have
On Jan 17, 2013, at 8:37 AM, Mark Nelson mark.nel...@inktank.com wrote:
On 01/17/2013 07:32 AM, Joseph Glanville wrote:
On 17 January 2013 20:46, Gandalf Corvotempesta
gandalf.corvotempe...@gmail.com wrote:
2013/1/16 Mark Nelson mark.nel...@inktank.com:
I don't know if I have to use a
On Thu, Jan 17, 2013 at 7:00 PM, Atchley, Scott atchle...@ornl.gov wrote:
On Jan 17, 2013, at 9:48 AM, Gandalf Corvotempesta
gandalf.corvotempe...@gmail.com wrote:
2013/1/17 Atchley, Scott atchle...@ornl.gov:
IB DDR should get you close to 2 GB/s with IPoIB. I have gotten our IB QDR
PCI-E
On Jan 17, 2013, at 10:07 AM, Andrey Korolyov and...@xdel.ru wrote:
On Thu, Jan 17, 2013 at 7:00 PM, Atchley, Scott atchle...@ornl.gov wrote:
On Jan 17, 2013, at 9:48 AM, Gandalf Corvotempesta
gandalf.corvotempe...@gmail.com wrote:
2013/1/17 Atchley, Scott atchle...@ornl.gov:
IB DDR
On Jan 17, 2013, at 10:14 AM, Gandalf Corvotempesta
gandalf.corvotempe...@gmail.com wrote:
2013/1/17 Atchley, Scott atchle...@ornl.gov:
IPoIB appears as a traditional Ethernet device to Linux and can be used as
such. Ceph has no idea that it is not Ethernet.
Ok. Now it's clear.
AFAIK, a
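(To make that concrete: anything that speaks TCP/IP, Ceph included, only ever deals with an IP address, and it cannot tell whether that address sits on eth0 or on an IPoIB interface such as ib0. A minimal Python sketch; the address is a placeholder for whatever you assign to ib0.)

# An application binds to an IP address; the link layer underneath
# (Ethernet or IPoIB) is invisible at this level.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("10.0.1.10", 6789))   # placeholder address on ib0; 6789 is the default mon port
s.listen(1)
print("listening - same code whether the NIC is Ethernet or IB")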
On Jan 17, 2013, at 11:01 AM, Gandalf Corvotempesta
gandalf.corvotempe...@gmail.com wrote:
2013/1/17 Atchley, Scott atchle...@ornl.gov:
Yes. It should get close to 1 GB/s where 1GbE is limited to about 125 MB/s.
Lower latency? Probably since most Ethernet drivers set interrupt coalescing
Hi,
On 17.01.2013 17:12, Atchley, Scott wrote:
On Jan 17, 2013, at 11:01 AM, Gandalf Corvotempesta
gandalf.corvotempe...@gmail.com wrote:
2013/1/17 Atchley, Scott atchle...@ornl.gov:
Yes. It should get close to 1 GB/s where 1GbE is limited to about 125 MB/s.
Lower latency? Probably since
Hi,
On 17.01.2013 17:21, Gandalf Corvotempesta wrote:
2013/1/17 Stefan Priebe s.pri...@profihost.ag:
We're using bonded active/active 2x10GbE with Intel ixgbe and I'm able to
get 2.3 GB/s.
Which kind of switch do you use?
HP 5920
Stefan
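(For context, 2.3 GB/s is close to line rate for a pair of 10GbE links; a quick sanity check in Python, assuming nothing beyond the link speeds themselves:)

# Two 10GbE links can carry at most 2 x 1.25 GB/s of payload.
theoretical_gb_s = 2 * 10e9 / 8 / 1e9
print(f"2.3 GB/s is {2.3 / theoretical_gb_s:.0%} of the {theoretical_gb_s:.1f} GB/s maximum")

So the bonded pair is delivering roughly 92% of what the wires can do.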
, it doesn't matter if the thing it is sitting on
is a single disk, an SSD+disk flashcache thing, or a big RAID array. All
that changes is the probability of failure.
Ok, it will fail, but this should not be an issue in a cluster like
Ceph, right?
With or without flashcache or SSD, ceph should
Hi Mark,
On 16.01.2013 at 22:53, Mark wrote
With only 2 SSDs for 12 spinning disks, you'll need to make sure the SSDs are
really fast. I use Intel 520s for testing, which are great, but I wouldn't
use them in production.
Why not? I use them for an SSD-only Ceph cluster.
Stefan
Hi List,
I have introduced flashcache (https://github.com/facebook/flashcache),
aiming to reduce Ceph metadata IOs to the OSD's disk. Basically, for every data
write, Ceph needs to write 3 things:
Pg log
Pg info
Actual data
The first 2 requests are small, but for a non-btrfs filesystem
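(A rough way to picture why a small SSD cache in front of the OSD disk helps with that pattern; this is only an illustrative Python model of the three writes described above, not Ceph or flashcache code, and the request sizes are made-up placeholders.)

# Each client write fans out into three IOs; two of them are tiny
# metadata updates, which is exactly what a flashcache-style SSD
# layer can absorb instead of the spinning disk.
client_writes = 1000
ios = {"pg log": 4 * 1024, "pg info": 4 * 1024, "data": 4 * 1024 * 1024}

small = [name for name, size in ios.items() if size <= 64 * 1024]
print(f"per client write: {len(ios)} IOs, {len(small)} of them small ({', '.join(small)})")
print(f"{client_writes} client writes -> {client_writes * len(small)} small IOs "
      f"that would otherwise seek on the OSD disk")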