On 11/24/2009 06:07 PM, Chris K. wrote:
> Hello,
> I'm writing in regard to the performance of open-iscsi on a
> 10gbe network. On your website you posted performance results
> indicating you reached read and write speeds of 450 MegaBytes per
> second.
>
> In our environment we use Myricom
Shachar f, on 11/25/2009 07:57 PM wrote:
> I'm running open-iscsi with scst on Broadcom 10Gig network and facing
> write latency issues.
> When using netperf over an idle network the latency for a single block
> round trip transfer is 30 usec and with open-iscsi it is 90-100 usec.
>
> I see tha
The dd command I am running is: time dd if=/dev/zero bs=1024k of=/mnt/iscsi/10gfile.txt count=10240
My fs is xfs; these are the parameters used to format the drive:
mkfs.xfs -d agcount=8 -l internal,size=128m -n size=8k -i size=2048 /dev/sdb1 -f
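For reference, the same sequential-write measurement can be sketched outside of dd. This is a minimal illustration only (it writes to a temporary local file, not the /mnt/iscsi mount, and the function name is made up); the fsync is added so the page cache does not mask device speed, which plain dd without conv=fsync does not do:

```python
import os
import tempfile
import time

def write_throughput_mb_s(path, block_size=1024 * 1024, count=100):
    """Sequentially write `count` zero-filled blocks of `block_size` bytes
    (the equivalent of dd if=/dev/zero bs=1024k) and return MB/s."""
    buf = b"\x00" * block_size
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(count):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())  # force writeback; dd without conv=fsync skips this
    elapsed = time.monotonic() - start
    return (block_size * count) / (1024 * 1024) / elapsed

if __name__ == "__main__":
    fd, path = tempfile.mkstemp()
    os.close(fd)
    try:
        print(f"{write_throughput_mb_s(path):.1f} MB/s")
    finally:
        os.remove(path)
```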
Here are the top values: Cpu(s): 0.0%us, 6.1%sy,
I'm running open-iscsi with scst on a Broadcom 10Gig network and facing write
latency issues.
When using netperf over an idle network, the latency for a single block round
trip transfer is 30 usec, while with open-iscsi it is 90-100 usec.
I see that Nagle's algorithm is disabled (TCP_NODELAY is set) when opening the socket o
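For anyone reproducing the comparison, here is a loopback sketch of the kind of single-block round-trip measurement netperf does, with Nagle's algorithm explicitly disabled via TCP_NODELAY. This is an illustration only, not open-iscsi's actual socket setup, and the function names are made up:

```python
import socket
import threading
import time

def _echo(listener):
    conn, _ = listener.accept()
    with conn:
        while True:
            data = conn.recv(65536)
            if not data:
                break
            conn.sendall(data)

def round_trip_usec(nodelay=True, payload=b"x" * 512, iters=200):
    """Average round-trip time in usec for `payload` echoed over loopback."""
    listener = socket.create_server(("127.0.0.1", 0))
    threading.Thread(target=_echo, args=(listener,), daemon=True).start()
    with socket.create_connection(listener.getsockname()) as s:
        # Disabling Nagle means small writes are sent immediately instead of
        # being held back to coalesce with later data.
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1 if nodelay else 0)
        start = time.monotonic()
        for _ in range(iters):
            s.sendall(payload)
            got = 0
            while got < len(payload):
                got += len(s.recv(65536))
        elapsed = time.monotonic() - start
    listener.close()
    return elapsed / iters * 1e6

if __name__ == "__main__":
    print(f"nodelay on : {round_trip_usec(True):.0f} usec/round-trip")
    print(f"nodelay off: {round_trip_usec(False):.0f} usec/round-trip")
```

Loopback numbers will of course be far below the 30 usec seen on the wire; the point is only to have a controlled way to compare the two socket settings.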
Thank you for your response. The SAN is a 10gbe Nimbus with what I believe
to be iscsitarget (http://iscsitarget.sourceforge.net/) as its target
server.
The switch is a Cisco Nexus5010 set to jumbo frames and flow control.
Through tcp/ip performance tests in conjunction with Cisco we have
proved that thi
Here is the dd command: time dd if=/dev/zero bs=1024k of=/mnt/iscsi/10gfile.txt count=10240
Here are the cpu values:
Cpu(s): 0.0%us, 8.7%sy, 0.0%ni, 25.0%id, 64.0%wa, 0.4%hi, 1.9%si, 0.0%st -> Client
Cpu(s): 0.6%us, 2.8%sy, 0.0%ni, 86.4%id, 9.7%wa, 0.0%hi, 0.4%si, 0.0%st -> SAN
I h
Ricky wrote:
> sda: got wrong page
You mean this right? The linux scsi layer was trying to figure out the
cache type. It got an unexpected answer and so ...
> sda: assuming drive cache: write through
it used the default of write through cache.
> sd 6:0:0:0: Attached scsi disk sda
> sd 6:0:0:
Yangkook Kim wrote:
> Thanks for your patch.
>
> I tested your patch and it worked fine.
>
> So, next you will upload this patch to the git tree
> and the patch will become part of the source code
> in the next release of open-iscsi.
>
> Is my understanding correct?
Yeah.
I merged it and uploa
This information also comes out when I run fdisk on /dev/sda, and I cannot
mkfs this disk.
I think the cache type should be write back,
but I do not know how to handle this situation.
be2iscsi can store its ip address in firmware/flash, so there is no need
to set one in an iface. Just use whatever is in firmware/flash.
On 25 Nov 2009 at 14:15, Chris K. wrote:
> Here are the cpu values :
> Cpu(s): 0.0%us, 8.7%sy, 0.0%ni, 25.0%id, 64.0%wa, 0.4%hi,
A note: I don't know how well open-iscsi uses multiple threads, but looking at
individual CPUs may be interesting, as the above is only an average over
multiple CPUs.
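To follow that suggestion, per-CPU idle and iowait can be derived from two snapshots of /proc/stat rather than top's averaged Cpu(s) line. A sketch, assuming the standard Linux /proc/stat per-cpu field layout (the function name is my own):

```python
def per_cpu_usage(stat_before, stat_after):
    """Given two /proc/stat snapshots (as text), return per-CPU %idle and
    %iowait over the interval.  Per-cpu fields are, in order:
    user nice system idle iowait irq softirq steal ..."""
    def parse(text):
        cpus = {}
        for line in text.splitlines():
            parts = line.split()
            # keep 'cpu0', 'cpu1', ... and skip the aggregate 'cpu' line
            if parts and parts[0] != "cpu" and parts[0].startswith("cpu"):
                cpus[parts[0]] = [int(v) for v in parts[1:9]]
        return cpus
    before, after = parse(stat_before), parse(stat_after)
    usage = {}
    for cpu, b in before.items():
        delta = [a - x for a, x in zip(after[cpu], b)]
        total = sum(delta) or 1  # guard against a zero-length interval
        usage[cpu] = {"idle": 100.0 * delta[3] / total,
                      "iowait": 100.0 * delta[4] / total}
    return usage

# On Linux you would take the two snapshots an interval apart, e.g.:
#   before = open("/proc/stat").read(); time.sleep(1)
#   after  = open("/proc/stat").read()
#   print(per_cpu_usage(before, after))
```

If one CPU shows far higher iowait than the others, that would point at a single-threaded bottleneck that the averaged top line hides.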