On 26 Nov 2009 at 11:06, Chris K. wrote:
> I thought of posting the statistics for all cores but chose the sum
> instead; here are all the details:
>
> Client :
> Tasks: 98 total, 2 running, 96 sleeping, 0 stopped, 0 zombie
> Cpu0 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
> Cpu1 : 0.0%us, 1.3%sy, 0.0%ni,
I thought of posting the individual core statistics but opted for the
sum; here are all the details during the dd transfer :
Client :
top - 05:33:59 up 5 days, 17:03, 2 users, load average: 0.46, 0.10, 0.03
Tasks: 98 total, 2 running, 96 sleeping, 0 stopped, 0 zombie
Cpu0 : 0.0%us
On 25 Nov 2009 at 14:15, Chris K. wrote:
> Here are the cpu values :
> Cpu(s): 0.0%us, 8.7%sy, 0.0%ni, 25.0%id, 64.0%wa, 0.4%hi, 1.9%si, 0.0%st
A note: I don't know how well open-iscsi uses multiple threads, but looking at
individual CPUs may be interesting, as the above is only an average for
multiple CPUs.
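For what it's worth, the per-CPU breakdown can be pulled straight from the kernel without running top interactively; a minimal sketch (Linux-only):

```shell
# Each cpuN line lists user nice system idle iowait irq softirq steal
# (and more) in USER_HZ ticks since boot; sampling twice a second
# apart and diffing gives the same percentages top shows per CPU
# (top shows them directly if you press "1").
grep '^cpu[0-9]' /proc/stat
```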
Boaz Harrosh wrote:
> On 11/24/2009 06:07 PM, Chris K. wrote:
>> Hello,
>> I'm writing in regards to the performance with open-iscsi on a
>> 10gbe network. On your website you posted performance results
>> indicating you reached read and write speeds of 450 MegaBytes per
>> second.
>>
>> In our
Thank you for your response. The SAN is a 10gbe Nimbus with what I
believe to be iscsitarget (http://iscsitarget.sourceforge.net/) as its
target server.
The switch is a Cisco Nexus5010 configured for jumbo frames and flow control.
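One quick way to confirm jumbo frames actually survive the whole client-switch-SAN path (a sketch; "san" stands in for the target's address, which isn't given here):

```shell
# A 9000-byte MTU leaves 9000 - 20 (IP header) - 8 (ICMP header)
# = 8972 bytes of ping payload. -M do sets don't-fragment, so the
# ping fails loudly if any hop is still at MTU 1500.
ping -M do -s $((9000 - 20 - 8)) -c 3 san
```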
Through TCP/IP performance tests in conjunction with Cisco, we have
proved that thi
Here is the dd command : time dd if=/dev/zero bs=1024k of=/mnt/iscsi/10gfile.txt count=10240
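One caveat with that command: without a sync, dd times writes into the client's page cache as well as to the target, which flatters the result. A variant that flushes before reporting (the path here is just an illustrative scratch file, not the thread's mount point):

```shell
# conv=fdatasync makes dd call fdatasync() before printing its
# timing, so the reported rate reflects data actually pushed out
# to storage rather than data parked in RAM.
dd if=/dev/zero of=/tmp/ddtest.bin bs=1024k count=64 conv=fdatasync
rm -f /tmp/ddtest.bin
```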
Here are the cpu values :
Cpu(s): 0.0%us, 8.7%sy, 0.0%ni, 25.0%id, 64.0%wa, 0.4%hi, 1.9%si, 0.0%st -> Client
Cpu(s): 0.6%us, 2.8%sy, 0.0%ni, 86.4%id, 9.7%wa, 0.0%hi, 0.4%si, 0.0%st -> SAN
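For reference, the 64.0%wa on the client says the CPU is mostly waiting on I/O rather than saturated. Converting the dd run into a rate is simple arithmetic; a sketch with an assumed elapsed time, since the real one isn't quoted here:

```shell
# 10240 blocks of 1024k = 10 GiB written. With an illustrative
# elapsed time of 40 s (not a value measured in this thread):
bytes=$((10240 * 1024 * 1024))
secs=40
echo "$((bytes / secs / 1024 / 1024)) MiB/s"   # prints "256 MiB/s"
```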
The dd command I am running is time dd if=/dev/zero bs=1024k of=/mnt/iscsi/10gfile.txt count=10240
My fs is xfs (mkfs.xfs -d agcount=8 -l internal,size=128m -n size=8k -i size=2048 /dev/sdb1 -f); those are the parameters used to format the drive.
Here are the top values: Cpu(s): 0.0%us, 6.1%sy,
On Tue, Nov 24, 2009 at 08:07:12AM -0800, Chris K. wrote:
> Hello,
> I'm writing in regards to the performance with open-iscsi on a
> 10gbe network. On your website you posted performance results
> indicating you reached read and write speeds of 450 MegaBytes per
> second.
>
> In our environment
On 11/24/2009 06:07 PM, Chris K. wrote:
> Hello,
> I'm writing in regards to the performance with open-iscsi on a
> 10gbe network. On your website you posted performance results
> indicating you reached read and write speeds of 450 MegaBytes per
> second.
>
> In our environment we use Myricom