Mike Christie micha...@cs.wisc.edu wrote on 27.08.2014 at 23:49 in
message 53fe5276.2060...@cs.wisc.edu:
On 08/27/2014 02:24 AM, Ulrich Windl wrote:
Learner Study learner.st...@gmail.com wrote on 27.08.2014 at 02:13 in
message
On 08/28/2014 12:59 AM, Ulrich Windl wrote:
To delete a device just do
echo 1 > /sys/block/sdX/device/delete
I think the confusing thing is that you don't see a delete in
/sys/block/sdX/device.
Not sure what you mean. I do:
ls /sys/block/sda/device/
block evt_media_change
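The sysfs attribute really is named delete, not remove. A minimal sketch of the delete-and-rescan cycle, assuming a device sdX on SCSI host0 (both placeholders); the live commands are left commented since they need root and a real device:

```shell
DEV=sdX     # placeholder device name, e.g. sda

# Remove the device: write 1 to its sysfs "delete" attribute (needs root).
# echo 1 > /sys/block/$DEV/device/delete

# Bring it back later by rescanning the SCSI host it was attached to:
# echo "- - -" > /sys/class/scsi_host/host0/scan

# The path the delete write goes to:
echo "/sys/block/$DEV/device/delete"
```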
On 08/28/2014 11:29 AM, Mike Christie wrote:
Learner Study learner.st...@gmail.com wrote on 27.08.2014 at 02:13 in
message
CAP8+hKW=HApS+=vxeaaibtbbd7yzndu4squt+84se99aglc...@mail.gmail.com:
Hi Mike,
Thanks for the suggestions.
I think you meant:
echo 1 > /sys/block/sdX/device/delete
I don't see /sys/block/sdX/device/remove
On Tue, 26 Aug 2014 13:05:11 -0700
Learner learner.st...@gmail.com wrote:
How many iscsi and underlying tcp sessions are you using? If multiple,
please check whether all tcp sessions are being used.
Btw, what tuning did you perform to fix the TCP BDP issue?
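Whether all sessions are up can be checked with open-iscsi's iscsiadm; a sketch, with the live commands commented since they need a logged-in initiator, and the sample output below stubbed with hypothetical IQNs:

```shell
# List sessions (one line each), and a verbose per-device view:
# iscsiadm -m session
# iscsiadm -m session -P 3

# Stubbed output in the shape "iscsiadm -m session" prints,
# to show counting the sessions:
sessions='tcp: [1] 10.0.0.1:3260,1 iqn.example:ram1
tcp: [2] 10.0.0.2:3260,1 iqn.example:ram2
tcp: [3] 10.0.0.3:3260,1 iqn.example:ram3'
printf '%s\n' "$sessions" | wc -l
```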
I'm just doing netcat tests to/from /dev/shm at the
I had applied the tuning for my 10G link but didn't see much impact. Actually,
for me TCP is already at line rate with 2-3 threads, but iSCSI/fio reads are around
5.5Gbps only, with 3-4 fio threads. Perhaps the bottleneck is somewhere else...
Thanks!
Sent from my iPhone
On Aug 27, 2014, at 8:25 AM,
Mark Lehrer m...@knm.org wrote on 25.08.2014 at 20:58 in message
ximss-10382...@knm.org:
I am trying to achieve 10Gbps in my single initiator/single target
env. (open-iscsi and IET)
On a semi-related note, are there any good guides out there to tuning Linux
for maximum single-socket
You are likely getting hit by the bandwidth-delay product.
Take a look at http://en.wikipedia.org/wiki/Bandwidth-delay_product
and http://www.kehlet.cx/articles/99.html
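As a rough worked example (the 10 Gbit/s rate and 1 ms RTT are assumed numbers, not measurements from this thread): the pipe holds link_rate x RTT bytes in flight, which is what the socket buffers must be able to cover:

```shell
LINK_BPS=10000000000    # assumed: 10 Gbit/s link
RTT_US=1000             # assumed: 1 ms round-trip, in microseconds

# BDP in bytes = (bits per second / 8) * RTT in seconds
BDP_BYTES=$(( LINK_BPS / 8 * RTT_US / 1000000 ))
echo "$BDP_BYTES"       # -> 1250000 bytes, i.e. ~1.2 MiB in flight
```

So even a 1 ms RTT at 10G already wants buffers beyond the old Linux defaults, which is why the rmem/wmem tuning later in this thread matters.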
On 08/25/2014 02:58 PM, Mark Lehrer wrote:
I am trying to achieve 10Gbps in my single initiator/single target
env.
iperf performance for TCP is line rate in both directions using 3 threads.
However, I only get 700MB/s writes and 570MB/s reads with iSCSI.
Thanks for any pointers!
On Tuesday, August 26, 2014 1:11:59 PM UTC-7, learner.study wrote:
Another related observation and some questions;
I am
I have a couple of iscsi links running on 1G, nowhere near your range of hardware
and demand.
I ran an ISP for about 20 years and got bitten by the BDP a number of
times, so when someone describes the problem I know what to look for.
On 08/26/2014 04:05 PM, Learner wrote:
How many iscsi
On Aug 26, 2014, at 3:11 PM, Learner learner.st...@gmail.com wrote:
Another related observation and some questions;
I am using open-iscsi on the initiator with IET on the target over a single 10Gbps link.
There are three IP aliases on each side.
I have 3 ramdisks exported by IET to the initiator.
I do iscsi
Hi Mike,
Thanks for the suggestions.
I think you meant:
echo 1 > /sys/block/sdX/device/delete
I don't see /sys/block/sdX/device/remove in my setup.
How do following FIO options look?
[default]
rw=read
size=4g
bs=1m
ioengine=libaio
direct=1
numjobs=1
filename=/dev/sda
runtime=360
iodepth=256
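A job file like the one above could be saved and run as sketched below (the fio invocation itself is commented out since it needs fio installed and the real iSCSI device, and read-test.fio is just an assumed filename). Note that numjobs=1 runs a single thread; driving all three LUNs at once would need numjobs=3 with per-job filenames, or one job section per device:

```shell
# Write the job file from the message out verbatim:
cat > /tmp/read-test.fio <<'EOF'
[default]
rw=read
size=4g
bs=1m
ioengine=libaio
direct=1
numjobs=1
filename=/dev/sda
runtime=360
iodepth=256
EOF

# fio /tmp/read-test.fio        # needs fio and /dev/sda present

# Sanity-check: the job file has 9 key=value option lines
grep -c '=' /tmp/read-test.fio
```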
On Aug 26, 2014, at 6:49 PM, Michael Christie micha...@cs.wisc.edu wrote:
I am monitoring with netstat -a, looking at the Send-Q and Recv-Q there for the
three iscsi/tcp sessions.
Also checked with tcpdump.
Thanks!
Sent from my iPhone
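The queue columns can be pulled out mechanically; a sketch, with the live commands commented (port 3260 is the standard iSCSI port; adjust if the target listens elsewhere) and a fabricated netstat-style line standing in to show the column positions:

```shell
# netstat -tan | grep ':3260'       # columns 2 and 3 are Recv-Q / Send-Q
# ss -tin 'dport = :3260'           # per-connection TCP detail (cwnd, rtt, ...)

# Fabricated sample line, to show extracting the two queue columns:
line='tcp   0   524288   10.0.0.4:51234   10.0.0.1:3260   ESTABLISHED'
echo "$line" | awk '{print $2, $3}'
```

A Send-Q that stays pinned high on one session while the others sit empty would point at uneven use of the three sessions rather than a buffer-size problem.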
On Aug 26, 2014, at 9:46 PM, Michael Christie micha...@cs.wisc.edu wrote:
I find upping some of the default Linux network params helps with
throughput.
Edit /etc/sysctl.conf, then reload the settings with sysctl -p:

# Increase network buffer sizes
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 8192 87380 16777216
net.ipv4.tcp_wmem = 4096
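After sysctl -p, the running values can be read back with sysctl -n (commented here since the result depends on the host); the 16777216 maximum works out to 16 MiB:

```shell
# sysctl -n net.core.rmem_max       # expect 16777216 after the reload
# sysctl -n net.ipv4.tcp_rmem      # "min default max" triple; the max
#                                  # bounds TCP receive-buffer autotuning

# 16777216 bytes is exactly 16 MiB:
echo $(( 16 * 1024 * 1024 ))
```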
On 08/25/2014 04:40 PM, Mark Lehrer wrote:
On Mon, 25 Aug 2014 15:48:02 -0500
Mike Christie micha...@cs.wisc.edu wrote:
On 08/25/2014 03:31 PM, Donald Williams wrote:
Thanks Mike - That helped
On Saturday, August 23, 2014 2:41:01 AM UTC+5:30, Mike Christie wrote:
On Aug 22, 2014, at 12:07 PM, Redwood Hyd redwo...@gmail.com wrote:
Hi All,
I am trying to achieve 10Gbps in my single initiator/single target env.
(open-iscsi and IET)
I
On Aug 22, 2014, at 12:07 PM, Redwood Hyd redwood...@gmail.com wrote:
Hi All,
I am trying to achieve 10Gbps in my single initiator/single target env.
(open-iscsi and IET)
I exported 3 ramdisks, via 3 different IP aliases to the initiator, did three
iscsi logins, 3 mount points, and then 3