Re: Problem diagnosing iscsi failure on the initiator

2010-06-14 Thread Michal Suchanek
On 13 June 2010 21:01, Mike Christie micha...@cs.wisc.edu wrote:
> On 06/12/2010 06:31 AM, Michal Suchanek wrote:
>
>> Hello
>>
>> I tried to get an iSCSI setup working, so I installed iscsitarget and
>> open-iscsi and tried to export a file as an iSCSI LUN.
>>
>> After doing so I could log in with iscsiadm, but I would not get any
>> disks on the initiator.
>>
>> Later I discovered that I had a typo in the ietd.conf file; the LUN
>> was simply not exported, giving a target with no LUNs available to the
>> initiator.
>>
>> While it is true that the error is logged in the kernel logs on the
>> target machine, I could not find any way to list the available LUNs on
>> the initiator.
>
> iscsiadm -m session -P 3

Thanks, this command does print the luns if there are any.

However, it is not obvious from the documentation that it should print
these, nor is it obvious from the output that some part is missing
when no LUNs are available.

It also does not work when I just add -P 3 to the discovery and node
commands that I use to log in to the target, and there is no obvious
reason why it should not.

Thanks

Michal




Re: Over one million IOPS using software iSCSI and 10 Gbit Ethernet, 1.25 million IOPS update

2010-06-14 Thread Vladislav Bolkhovitin

Pasi Kärkkäinen, on 06/11/2010 11:26 AM wrote:

On Fri, Feb 05, 2010 at 02:10:32PM +0300, Vladislav Bolkhovitin wrote:

Pasi Kärkkäinen, on 01/28/2010 03:36 PM wrote:

Hello list,

Please check these news items:
http://blog.fosketts.net/2010/01/14/microsoft-intel-push-million-iscsi-iops/
http://communities.intel.com/community/openportit/server/blog/2010/01/19/100-iops-with-iscsi--thats-not-a-typo
http://www.infostor.com/index/blogs_new/dave_simpson_storage/blogs/infostor/dave_simpon_storage/post987_37501094375591341.html

1,030,000 IOPS over a single 10 Gb Ethernet link

Specifically, Intel and Microsoft clocked 1,030,000 IOPS (with 
512-byte blocks), and more than 2,250MBps with large block sizes (16KB 
to 256KB) using the Iometer benchmark
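(A quick back-of-the-envelope sanity check, mine rather than from the
articles: at 512-byte blocks the payload bandwidth is small, so the
million-IOPS figure stresses per-I/O processing cost, not the wire.)

    1,030,000 IOPS x 512 bytes ~= 527 MB/s of payload -- well under the
    ~1,250 MB/s raw capacity of a single 10 GbE link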


So.. who wants to beat that using Linux + open-iscsi? :)
Personally, I don't like such tests and don't trust them at all. They
are pure marketing. Their only goal is to create the impression that X
(Microsoft and Windows in this case) is far ahead of the rest of the
world. I've seen a good article on the Web about the usual tricks
vendors use to game benchmarks for marketing material, but,
unfortunately, I can't find the link at the moment.


The problem is that you can't tell from such tests whether X will also
be ahead of the world in real-life usage, because such tests are always
heavily optimized for the particular benchmarks used, and such
optimizations almost always hurt real-life cases. You will also hardly
find descriptions of those optimizations, or any scientific description
of the tests themselves; the results are published practically only in
marketing documents.


Anyway, as far as I can see, Linux supports all the hardware used, as
well as all of its advanced performance modes, so anyone who repeats
this test with the same setup should get results that are no worse.


Personally, I found it funny to see MS claim in the WinHEC
presentation
(http://download.microsoft.com/download/5/E/6/5E66B27B-988B-4F50-AF3A-C2FF1E62180F/COR-T586_WH08.pptx)
that they got 1.1GB/s from 4 connections. At the beginning of 2008 I
saw a *single* dd pushing data at that rate over a *single* connection
from a Linux initiator to an iSCSI-SCST target using regular Myricom hardware
without any special acceleration. I didn't realize how proud of Linux I
should have been :).




It seems they've described the setup here:
http://communities.intel.com/community/wired/blog/2010/04/20/1-million-iop-article-explained

And today they seem to have a demo which produces 1.3 million IOPS!

1 Million IOPS? How about 1.25 Million!:
http://communities.intel.com/community/wired/blog/2010/04/22/1-million-iops-how-about-125-million


I'm glad for them. The only thing that surprises me is that none of the Linux
vendors, including Intel itself, is interested in repeating this test on
Linux and fixing whatever problems are found, if any. Ten years ago a similar
test comparing Linux TCP scalability with Windows caused a massive
reaction and great TCP improvements.


How to do the test is quite straightforward, starting with creating a
test tool for Linux as effective as IOMeter is on Windows [1]. Maybe
the lack of such a tool scares the vendors away?


Vlad

[1] None of the performance measurement tools for Linux I've seen so
far, including disktest (although I haven't looked at versions from the
last 1-1.5 years) and fio, has satisfied me, for various reasons.
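For what it's worth, a rough fio approximation of the 512-byte
random-read Iometer run might look like the following (device name,
queue depth, and job count are my guesses, not what Intel/Microsoft
used):

    fio --name=randread --filename=/dev/sdb --rw=randread --bs=512 \
        --ioengine=libaio --direct=1 --iodepth=32 --numjobs=8 \
        --runtime=60 --time_based --group_reporting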





Re: mc/s - not yet in open-iscsi?

2010-06-14 Thread Vladislav Bolkhovitin

Nicholas A. Bellinger, on 06/11/2010 12:45 AM wrote:

On Thu, 2010-06-10 at 13:35 -0700, Nicholas A. Bellinger wrote:

On Thu, 2010-06-10 at 13:36 +0400, Vladislav Bolkhovitin wrote:

Christopher Barry, on 06/10/2010 03:09 AM wrote:

Greetings everyone,

Had a question about implementing mc/s using open-iscsi today. Wasn't
really sure exactly what it was. From googling about, I can't find any
references of people doing it with open-iscsi, although I see a few
references to people asking about it. Anyone know the status on that?
http://scst.sourceforge.net/mc_s.html. In short, there's nothing in it
worth the implementation and maintenance effort.

Heh, this URL is a bunch of b*llshit handwaving because the iscsi-scst
target does not support the complete set of features defined by
RFC-3720, namely MC/S and ErrorRecoveryLevel=2, let alone asymmetric
logical unit access (ALUA) MPIO.   Vlad, if you are so sure that MC/S is
so awful, why don't you put your money where your mouth is and start
asking these questions on the IETF IPS list and see what Julian Satran
(the RFC editor) has to say about them..?  


H...?



Btw, just for those following along, here is what MC/S and ERL=2, when
used in combination (yes, they are complementary), really do:

http://linux-iscsi.org/builds/user/nab/Inter.vs.OuterNexus.Multiplexing.pdf

Also, I should mention in all fairness that my team was the first to
implement both a Target and Initiator capable of MC/S and
ErrorRecoveryLevel=2 running on Linux, and the first target capable of
running MC/S from multiple initiator implementations.

Unfortunately, Vlad has never implemented any of these features in either
a target or initiator, so really he is not in a position to say what is
'good' or what is 'bad' about MC/S.


One more personal attack and misleading (read: deceiving) half-truth? My 
article is a technical article, so if you see anything wrong in it, you 
are welcome to point it out and correct me. But instead you prefer 
personal attacks.


If you want to call me an ignorant idiot talking about something he
completely fails to understand, don't forget to say the same about the
Linux SCSI maintainers, who also dislike MC/S for the same reasons
(I've basically just elaborated on them) and who also have not
implemented MC/S anywhere (although I almost did in iSCSI-SCST, but
stopped in time). Simply put, one doesn't have to jump from a
fifth-floor window to know the consequences of that move. (Interesting:
if I'm not in a position to say anything about MC/S, who is?)


The funny thing is that your link says basically the same as my
article, and rather supports it.


Regarding your team being the first, don't forget to also mention that
your implementation was later rejected by the Linux community, which
preferred open-iscsi instead.


Vlad




Re: mc/s - not yet in open-iscsi?

2010-06-14 Thread Vladislav Bolkhovitin

Raj, on 06/12/2010 03:17 AM wrote:

Nicholas A. Bellinger n...@... writes:


Btw, just for those following along, here is what MC/S and ERL=2, when
used in combination (yes, they are complementary), really do:

http://linux-iscsi.org/builds/user/nab/Inter.vs.OuterNexus.Multiplexing.pdf

Also, I should mention in all fairness that my team was the first to
implement both a Target and Initiator capable of MC/S and
ErrorRecoveryLevel=2 running on Linux, and the first target capable of
running MC/S from multiple initiator implementations.



But what is the end result? open-iSCSI still doesn't have MC/S even
though it is useful?


The end result is that any driver-level multipath, including MC/S, is
forbidden in Linux, to encourage developers and vendors to improve MPIO
rather than work around its problems with homebrewed multipath
solutions [1]. As a result of this very smart policy, Linux MPIO is in
very good shape now. In particular, it scales quite well with more
links. In contrast, according to Microsoft's data linked on this list
recently, Windows MPIO scales quite badly, while Linux MPIO scales
about as well as Windows MC/S does [2]. (BTW, this is good evidence
that MC/S doesn't have any inherent performance advantage over MPIO.)


But we are on a Linux list, so we don't care about Windows' problems.
Everybody is encouraged to use MPIO and, upon hitting any problem with
it, to report it on the appropriate mailing lists.
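For completeness, the usual MPIO recipe with open-iscsi is to open one
session per path and let dm-multipath aggregate them; a minimal sketch
(target name and portal addresses invented):

    # log in to the same target over two portals
    iscsiadm -m node -T iqn.2010-06.example:storage.lun1 -p 10.0.0.1 --login
    iscsiadm -m node -T iqn.2010-06.example:storage.lun1 -p 10.0.1.1 --login

    # /etc/multipath.conf -- minimal
    defaults {
        user_friendly_names yes
    }

    multipath -ll   # should show one device with two active paths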


Vlad

[1] Yes, MC/S is just a workaround, apparently introduced by the IETF
committee to eliminate the multipath problems they saw in SCSI inside
their _own_ protocol, instead of pushing the T10 committee to make the
necessary changes in SAM. Or perhaps because the T10 committee would
not accept that those problems existed. But I'm not familiar with
history that deep, so I can only speculate about it.


[2] The Windows MPIO limitations may well explain why Microsoft is the
only OS vendor pushing MC/S: for them it's simpler to implement MC/S
than to fix those MPIO scalability problems. Additionally, it could have
future marketing value for them: improved MPIO scalability would be a
big selling point to push customers to migrate to a new Windows version.
But this is again just vague speculation. We will see.





Re: Over one million IOPS using software iSCSI and 10 Gbit Ethernet, 1.25 million IOPS update

2010-06-14 Thread guy keren

Vladislav Bolkhovitin wrote:

[...]


How to do the test is quite straightforward, starting with creating a
test tool for Linux as effective as IOMeter is on Windows [1]. Maybe
the lack of such a tool scares the vendors away?


Vlad

[1] None of the performance measurement tools for Linux I've seen so
far, including disktest (although I haven't looked at versions from the
last 1-1.5 years) and fio, has satisfied me, for various reasons.


there's an iometer agent (dynamo) for linux (but the official version
has one fundamental flaw, which should be fixed - it doesn't use AIO
properly) - you just need a windows desktop to launch the test, and run
the dynamo agent on a linux machine.
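iirc you start the linux agent pointing back at the windows box running
the Iometer GUI, roughly like this (hostnames/addresses made up; -i is
the Iometer host, -m the manager's own address, if I remember the flags
right):

    # on the linux machine under test
    ./dynamo -i winbox.example.com -m 192.168.0.20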


there is also vdbench from sun.

--guy




Re: mc/s - not yet in open-iscsi?

2010-06-14 Thread Nicholas A. Bellinger
On Mon, 2010-06-14 at 23:44 +0400, Vladislav Bolkhovitin wrote:
> Raj, on 06/12/2010 03:17 AM wrote:
>> Nicholas A. Bellinger n...@... writes:
>>
>>> Btw, just for those following along, here is what MC/S and ERL=2, when
>>> used in combination (yes, they are complementary), really do:
>>>
>>> http://linux-iscsi.org/builds/user/nab/Inter.vs.OuterNexus.Multiplexing.pdf
>>>
>>> Also, I should mention in all fairness that my team was the first to
>>> implement both a Target and Initiator capable of MC/S and
>>> ErrorRecoveryLevel=2 running on Linux, and the first target capable of
>>> running MC/S from multiple initiator implementations.
>>
>> But what is the end result? open-iSCSI still doesn't have MC/S even
>> though it is useful?
>
> The end result is that any driver-level multipath, including MC/S, is
> forbidden in Linux, to encourage developers and vendors to improve MPIO
> rather than work around its problems with homebrewed multipath
> solutions [1].

This is such a pre-multi-core, pre-FSB, year-2005 argument..

> As a result of this very smart policy, Linux MPIO is in very good
> shape now. In particular, it scales quite well with more links. In
> contrast, according to Microsoft's data linked on this list recently,
> Windows MPIO scales quite badly, while Linux MPIO scales about as well
> as Windows MC/S does [2]. (BTW, this is good evidence that MC/S
> doesn't have any inherent performance advantage over MPIO.)
 

Then why can't you produce any numbers for Linux or MSFT, hmmm..?

Just as a matter of record, back in 2005 it was shown that MC/S *and*
MPIO on Linux/iSCSI were complementary, improving throughput by ~1
Gb/sec using 1st-generation (single-core) AMD Opteron x86_64 chips on a
PCI-X 133 MHz 10 Gb/sec adapter with stateless TCP offload:

http://www.mail-archive.com/linux-s...@vger.kernel.org/msg02225.html

> But we are on a Linux list, so we don't care about Windows' problems.
> Everybody is encouraged to use MPIO and, upon hitting any problem with
> it, to report it on the appropriate mailing lists.
 

You are completely missing the point and ignoring the 'bigger picture'
of what is going on in the iSCSI ecosystem.

Also, if you think that arguing against the transparency of the upstream
Linux/iSCSI fabric for complete RFC-3720 support with some unproven
conjecture is going to win you something here, you are completely wrong.

Just because you have not done the work yourself to implement the
interesting RFC-3720 features does not mean you get to dictate (or
dictate to others on this list) what the future of Linux/iSCSI will be.

> Vlad
>
> [1] Yes, MC/S is just a workaround, apparently introduced by the IETF
> committee to eliminate the multipath problems they saw in SCSI inside
> their _own_ protocol, instead of pushing the T10 committee to make the
> necessary changes in SAM. Or perhaps because the T10 committee would
> not accept that those problems existed. But I'm not familiar with
> history that deep, so I can only speculate about it.

Completely and utterly wrong.  You might want to check your copy of SAM,
because a SCSI fabric is allowed to have multiple communication paths
and ports as long as ordering is enforced for the I_T Nexus at the
Target port.

Again, you can speculate however you want without any MC/S
implementation experience, but if you are ready to get serious about
your unproven conjecture, please take it to the IETF IPS list and we
will see what the fathers of iSCSI have to say about it.

 
> [2] The Windows MPIO limitations may well explain why Microsoft is the
> only OS vendor pushing MC/S: for them it's simpler to implement MC/S
> than to fix those MPIO scalability problems. Additionally, it could
> have future marketing value for them: improved MPIO scalability would
> be a big selling point to push customers to migrate to a new Windows
> version. But this is again just vague speculation. We will see.
 

Yeah, sure: Intel, MSFT and NetApp claiming 1.25 million IOPS with MC/S
on 5500-series chipsets with 5600 32nm chips, and demonstrating it
publicly all over the place for the last quarter, is *really* vague
speculation.

--nab

