Mark Lehrer m...@knm.org wrote on 25.08.2014 at 20:58 in message
ximss-10382...@knm.org:
I am trying to achieve 10Gbps in my single initiator/single target
env. (open-iscsi and IET)
On a semi-related note, are there any good guides out there to tuning Linux
for maximum single-socket
On 08/25/2014 12:27 PM, Wyllys Ingersoll wrote:
in iscsi_copy_operational_params, conn->max_recv_dlength is set to
__padding(conn_conf->MaxRecvDataSegmentLength);
initiator_common.c:153 :: conn->max_recv_dlength =
__padding(conn_conf->MaxRecvDataSegmentLength);
This (__padding) rounds up
Mike Christie micha...@cs.wisc.edu wrote on 26.08.2014 at 09:10 in
message 53fc32e0.9060...@cs.wisc.edu:
[...]
The attached patch should fix this. It just rounds down instead of
rounding up.
As round_down_on_pad_bound(unsigned int param) does not do any padding at
all, but alignment, why
Thanks for the tips. I'm changing the fio settings to see if I get an
improvement; I will post results later.
I'm mostly concerned with iscsi/rbd; I haven't yet isolated iscsi by
itself to a file, though I have run tests using straight librados
(non-iscsi) that show that better performance IS
You are likely getting hit by the bandwidth-delay product.
Take a look at http://en.wikipedia.org/wiki/Bandwidth-delay_product
and http://www.kehlet.cx/articles/99.html
On 08/25/2014 02:58 PM, Mark Lehrer wrote:
I am trying to achieve 10Gbps in my single initiator/single target
env.
The actual maximum is really 16777212, since 16777212 & 0x03 == 0,
which will cause it to pass through the __padding function with no change.
Nice catch. Yeah, it is a bug in the iscsi code as far as I can tell. I
should be able to send a patch shortly.
Just curious, did you have a
Thanks Mike. I will use them and let you know.
On Tuesday, August 26, 2014 12:07:28 AM UTC-4, Mike Christie wrote:
There are 2 patches attached.
1. iscsi-tcp-export-local-port.patch.
This is required. Apply this to your kernel. When you login you will see
On 08/26/2014 07:55 AM, Wyllys Ingersoll wrote:
I'm mostly concerned with iscsi/rbd; I haven't yet isolated iscsi by
itself to a file, though I have run tests using straight librados
It is just easier to make sure iscsi is ok first since that is what we
are experts on here. If that is already
iperf performance for TCP is line rate in both directions using 3 threads
However, I can only get 700MB/s writes and 570MB/s reads with iSCSI.
Thanks for any pointers!
On Tuesday, August 26, 2014 1:11:59 PM UTC-7, learner.study wrote:
Another related observation and some questions;
I am
I have a couple of iscsi links running on 1G, nowhere near your range of
hardware and demand.
I ran an ISP for about 20 years and got bitten by the BDP a number of
times, so when someone describes this problem I know what to look for.
On 08/26/2014 04:05 PM, Learner wrote:
How many iscsi
On Aug 26, 2014, at 3:11 PM, Learner learner.st...@gmail.com wrote:
Another related observation and some questions;
I am using open-iscsi on the initiator with IET on the target over a single 10Gbps link
There are three ip aliases on each side
I have 3 ramdisks exported by IET to init
I do iscsi
Hi Mike,
Thanks for suggestions
I think you meant:
echo 1 > /sys/block/sdX/device/delete
I don't see /sys/block/sdX/device/remove in my setup.
How do following FIO options look?
[default]
rw=read
size=4g
bs=1m
ioengine=libaio
direct=1
numjobs=1
filename=/dev/sda
runtime=360
iodepth=256
On Aug 26, 2014, at 6:49 PM, Michael Christie micha...@cs.wisc.edu wrote:
On Aug 26, 2014, at 3:11 PM, Learner learner.st...@gmail.com wrote:
Another related observation and some questions;
I am using open-iscsi on the initiator with IET on the target over a single 10Gbps link
There are three ip
I am monitoring with netstat -a, looking at the Send-Q and Recv-Q columns
for the three iscsi/tcp sessions.
Also checked with tcpdump.
Thanks!
Sent from my iPhone
On Aug 26, 2014, at 9:46 PM, Michael Christie micha...@cs.wisc.edu wrote:
On Aug 26, 2014, at 6:49 PM, Michael Christie