Hi Bruno,
I’ll respond to the questions inline.

Peter J. Pouliot, CISSP
Senior SDET, OpenStack

Microsoft
New England Research & Development Center
One Memorial Drive, Cambridge, MA 02142
ppoul...@microsoft.com | Tel: +1 (857) 453 6436

From: Bruno Oliveira ~lychinus [mailto:brunnop.olive...@gmail.com]
Sent: Wednesday, June 26, 2013 10:00 AM
To: Peter Pouliot
Subject: [Openstack][Cinder][Hyper-V] iSCSI dealing in a High-Throughput Network

Hello Peter, how are you doing?

Excuse me for the sudden email, but there's something very important that I'd
like to ask your advice on, if you don't mind.

We're (at MANDIC) are now dealing with what would be the best protocol for when 
dealing with Cinder (Block Storage): NFS or iSCSI.

iSCSI is the best option.

As far as I've read, iSCSI is extremely resilient and more reliable than NFS,
since it already addresses issues like network faults by using multiple
channels as individual paths to make sure the data reaches its targets. On the
other hand, NFS would require the infrastructure itself to guarantee network
connectivity.

NFS exports a filesystem and requires an NFS client at the OS layer.

iSCSI can be consumed directly via hardware and exports block devices.
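
To make the distinction concrete, here is a rough sketch of what a host does
in each case (a sketch only, assuming a Linux host with open-iscsi and an NFS
client installed; the portal address, export path and mount point below are
just placeholders):

    import subprocess

    # Placeholders -- substitute your own storage addresses.
    ISCSI_PORTAL = "192.0.2.10:3260"
    NFS_EXPORT = "192.0.2.20:/export/volumes"

    # iSCSI: discover targets on the portal and log in.  The result is a raw
    # block device (e.g. /dev/sdX) that the host consumes directly.
    subprocess.check_call(["iscsiadm", "-m", "discovery",
                           "-t", "sendtargets", "-p", ISCSI_PORTAL])
    subprocess.check_call(["iscsiadm", "-m", "node",
                           "-p", ISCSI_PORTAL, "--login"])

    # NFS: the storage is exposed as a filesystem, so the OS-level NFS client
    # has to mount it (the mount point must already exist) before anything
    # can use it.
    subprocess.check_call(["mount", "-t", "nfs",
                           NFS_EXPORT, "/mnt/nfs-volumes"])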

That is very important for production use indeed, but I also can't ignore the
performance of the two (I've also learned that NFS might have superior read IO
due to its read cache, but it falls short when it comes to writing -- unless I
have some sort of write cache, like the one deployed in the ZFS filesystem).

So once again it depends on how you want to use it.  NFS requires some sort of
distributed filesystem under it to scale.
Currently with iSCSI we just plug in storage nodes and don't care about
filesystems.

Note: we currently have a Sun/Oracle storage appliance using the ZFS
filesystem.

Question 1) In any case, I'd like to know your thoughts on it. I'm not sure
myself whether it's even possible to have Hyper-V (even 2012) use NFS -- is it
possible at all?

So Hyper-V itself uses the native iSCSI client (you just need to start the
iSCSI Initiator service), and that's it.  By default it can pass Cinder iSCSI
volumes through.
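
For reference, enabling it looks roughly like this (a small sketch run on the
Hyper-V host; it assumes Python is available there and simply shells out to
the standard Windows service-control commands):

    import subprocess

    # Set the Microsoft iSCSI Initiator Service (msiscsi) to start
    # automatically, then start it right away.
    subprocess.check_call(["sc", "config", "msiscsi", "start=", "auto"])

    # "net start" returns non-zero if the service is already running,
    # so don't treat that as fatal.
    subprocess.call(["net", "start", "msiscsi"])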

In terms of consuming NFS, there is also a native NFS client.  That is a
feature that must be installed, and I'm not entirely sure whether it's present
or available on Hyper-V Server.  You may need to use a full server SKU for
that.
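
If you want to check on a full Windows Server 2012 install, something along
these lines should work -- again just a sketch shelling out from Python, and
the NFS-Client feature name assumes Server 2012 (it may not be available on
the stripped-down Hyper-V Server SKU):

    import subprocess

    # Query the 'Client for NFS' feature and install it if it's missing.
    # Get-/Install-WindowsFeature require the full Server Manager tooling.
    subprocess.check_call([
        "powershell", "-Command",
        "if (-not (Get-WindowsFeature NFS-Client).Installed) "
        "{ Install-WindowsFeature NFS-Client }"
    ])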

Now that being said, in theory work could be done to have the Cinder client
work natively over NFS as long as that feature is present; however, it
currently only supports iSCSI.


Question 2) Performance-wise, in a very high-throughput network (supposedly
10G), would iSCSI perform better than the alternatives? (I've read a lot on
the internet, but as you know, I'm not sure how practical those articles are
or whether they're just comparing the two theoretically.)

I personally ran iSCSI over 1G for years without incident for production
workloads with Linux clusters.
That being said, we still dedicated interfaces to storage traffic.

Thank you very much, Peter.

Best regards.

--

Bruno Oliveira
Developer, Software Engineer
irc: lychinus | skype: brunnop.oliveira
brunnop.olive...@gmail.com


