In iscsi_copy_operational_params, conn->max_recv_dlength is set to
__padding(conn_conf->MaxRecvDataSegmentLength);

initiator_common.c:153 :: conn->max_recv_dlength =
__padding(conn_conf->MaxRecvDataSegmentLength);

This (__padding) rounds up the value. If the configured
MaxRecvDataSegmentLength…
I am trying to achieve 10 Gbps in my single-initiator/single-target
env. (open-iscsi and IET)
On a semi-related note, are there any good guides out there to tuning Linux
for maximum single-socket performance? On my 40 gigabit setup, I seem to
hit a wall around 3 gigabits when doing a single TCP socket.
I find upping some of the default Linux network params helps with
throughput.

Edit /etc/sysctl.conf, then apply the changes as root with sysctl -p:

# Increase network buffer sizes
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 8192 87380 16777216
net.ipv4.tcp_wmem = 4096 6…
On 08/25/2014 12:27 PM, Wyllys Ingersoll wrote:
> in iscsi_copy_operational_params, the conn->max_recv_dlength is set to
> __padding(conn_conf->MaxRecvDataSegmentLength);
>
> initiator_common.c:153 :: conn->max_recv_dlength =
> __padding(conn_conf->MaxRecvDataSegmentLength);
>
> This (__padding)…
On 08/25/2014 03:31 PM, Donald Williams wrote:
> On a semi-related note, are there any good guides out there to tuning
> Linux for maximum single-socket performance? On my 40 gigabit setup, I
> seem to hit a wall around 3 gigabits when doing a single TCP socket. To
> go far above that I need to d
On Mon, 25 Aug 2014 15:48:02 -0500
Mike Christie wrote:
On 08/25/2014 03:31 PM, Donald Williams wrote:
On a semi-related note, are there any good guides out there to
tuning Linux for maximum single-socket performance?
What kernel are you using? Are you doing IO to one LU or multiple?
Single
Yes, using open-iscsi with tgt as the target side.
I used fio with the following job file. I only used 1 job (thread) because
I want to see the max that a single job can read at a time. Even after
maximizing MaxXmitDataSegmentLength and MaxRecvDataSegmentLength, I
don't see much difference.
Try setting some queue depth, like 64. Not sure what fio defaults to if not
specified, but if it is 1 that won't yield good performance.
Original message
From: Wyllys Ingersoll
Date:08/25/2014 3:49 PM (GMT-08:00)
To: open-iscsi@googlegroups.com
Subject: Re: iscsi over RBD perfor
On 08/25/2014 04:40 PM, Mark Lehrer wrote:
> On Mon, 25 Aug 2014 15:48:02 -0500
> Mike Christie wrote:
>> On 08/25/2014 03:31 PM, Donald Williams wrote:
>> On a semi-related note, are there any good guides out there to
>> tuning Linux for maximum single-socket performance?
>>
>> What kernel are
Also see what Donald recommends for increasing the iscsi and device
queue depths. You will want the device and fio queue depths to be
similar. For bs, you should use something like 256K. I think then you
also want --iodepth_batch to be around the queue depth.
For iscsi settings make sure they got…
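The job file Wyllys mentions was not included in this excerpt. A hypothetical fio job file that follows the suggestions in this thread (queue depth around 64, 256K blocks, iodepth_batch near the queue depth, a single job) might look like the sketch below; the device path /dev/sdX is a placeholder for the iSCSI-attached disk, not a value from the thread:

```
; hypothetical job file, assembled from the advice in this thread
[global]
ioengine=libaio
direct=1
rw=read
bs=256k
iodepth=64
iodepth_batch=64
numjobs=1
runtime=60

[single-socket-read]
filename=/dev/sdX
```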
There are 2 patches attached.
1. iscsi-tcp-export-local-port.patch.
This is required. Apply this to your kernel. When you login you will see
/sys/class/iscsi_connection/connection1:0/local_port
cat /sys/class/iscsi_connection/connection1:0/local_port
57568
This would match what you see in netstat.
>>> "Mark Lehrer" wrote on 25.08.2014 at 20:58 in message:
>>> I am trying to achieve 10 Gbps in my single initiator/single target
>>> env. (open-iscsi and IET)
>
> On a semi-related note, are there any good guides out there to tuning Linux
> for maximum single-socket performance? On my 40