JF
 
In my experience, NSD over 10GbE doesn't translate into good performance, regardless of how good your back-end storage architecture is, so I guess it depends on what you're trying to achieve.
 
If you're trying to achieve "we built a cluster and presented a filesystem", then yes, NSD over Ethernet is probably fine. If you're aiming to deliver a guaranteed performance profile of tens of GB/s, then Ethernet is probably not the network architecture of choice.
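
To put rough numbers on that (just a quick back-of-the-envelope sketch; the 0.8 usable-throughput-per-link factor is an assumption for illustration, not a measurement):

import math

# Back-of-the-envelope: how many Ethernet links "tens of GB/s" implies.
# The 0.8 efficiency factor is purely an assumption for illustration.
def usable_gbytes_per_sec(link_gbits, efficiency=0.8):
    return link_gbits / 8 * efficiency

target_gbs = 20  # say, 20 GB/s sustained
for link in (10, 25, 40, 100):  # common Ethernet speeds in Gbit/s
    per_link = usable_gbytes_per_sec(link)
    links = math.ceil(target_gbs / per_link)
    print(f"{link}GbE: ~{per_link:.1f} GB/s usable per link, "
          f"~{links} links for {target_gbs} GB/s")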
 
In the same way, if you want a low-latency block storage access environment you won't choose iSCSI; you will use FC or InfiniBand.
 
 
In reference to the original post, I would be curious to know how many switches and how many ports were proposed in the 16Gb FC environment, and whether the person who proposed it had a good understanding of the actual requirements. I think, at least in Australia, you would have a good chance of getting a pair of switches to connect two NSD servers to a single FC storage array for far less than that.
 
 
Andrew Beattie
Software Defined Storage  - IT Specialist
Phone: 614-2133-7927
 
 
----- Original message -----
From: Jan-Frode Myklebust <janfr...@tanso.net>
Sent by: gpfsug-discuss-boun...@spectrumscale.org
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Cc:
Subject: Re: [gpfsug-discuss] Anybody running GPFS over iSCSI?
Date: Mon, Dec 17, 2018 5:50 PM
 
I’d be curious to hear if all these arguments against iSCSI shouldn’t also apply to the NSD protocol over TCP/IP?


-jf
On Mon, 17 Dec 2018 at 01:22, Jonathan Buzzard <jonathan.buzz...@strath.ac.uk> wrote:
On 13/12/2018 20:54, Buterbaugh, Kevin L wrote:

[SNIP]

>
> Two things that I am already aware of are:  1) use jumbo frames, and 2)
> run iSCSI over its own private network. Other things I should be aware
> of?!?
>

Yes, don't do it. Really do not do it unless you have datacenter
Ethernet switches and adapters, the kind required for FCoE.
Basically, unless you have per-channel pause on your Ethernet fabric,
performance will at some point all go to shit.

So what happens is your NSD server makes a whole bunch of requests to read
blocks off the storage array. The requests are small; the responses are not.
The responses can overwhelm the Ethernet channel, at which point performance
falls through the floor. Now you might be lucky and not see this,
especially if you have, say, 10Gbps links from the storage and 40Gbps
links to the NSD servers, but you are taking a gamble. Also, the more
storage arrays you have, the more likely you are to see the problem.
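
A quick sketch of the arithmetic (the link speeds are just the example numbers above, purely illustrative):

# Incast sketch: several arrays answering read requests at once into a
# single NSD server link. All speeds in Gbit/s, purely illustrative.
def oversubscription(n_arrays, array_link_gbits, nsd_link_gbits):
    return (n_arrays * array_link_gbits) / nsd_link_gbits

for n in (2, 4, 8):
    ratio = oversubscription(n, array_link_gbits=10, nsd_link_gbits=40)
    verdict = "fits" if ratio <= 1 else "link overwhelmed; standard Ethernet pauses everything"
    print(f"{n} arrays x 10Gbps into one 40Gbps NSD link: {ratio:.1f}x ({verdict})")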

To fix this you have two options. The first is datacenter Ethernet with
per-channel pause. This option is expensive, probably in the same
ballpark as fibre channel; at least it was last time I looked, though
that was some time ago now.

The second option is dedicated links between the storage array and the
NSD server. That is, the cables go directly between the storage array
and the NSD server with no switches involved. This option is a
maintenance nightmare.

At the site where I did this, we had to go with option two because I needed to
make it work. We ended up ripping it all out and replacing it with FC.

Personally I would see what price you can get DSS storage for, or use
SAS arrays.

Note that iSCSI can in theory work; the issue is that GPFS scatters
data to the winds over multiple storage arrays, so your Ethernet channel
gets swamped and a standard Ethernet pause stalls all the upstream traffic.
The vast majority of iSCSI use cases don't see this effect.

There is a reason that, to run FC over Ethernet, they had to make Ethernet
lossless.


JAB.

--
Jonathan A. Buzzard                         Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
 

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
