Thanks for the tips. I'm changing the fio settings to see if I get an
improvement; I'll post results later.
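
For reference, this is roughly where I'm taking the job file based on the
suggestions; the queue-depth numbers are starting points I still need to tune:

```ini
# fio job file -- suggested bs/iodepth values folded in, still to be tuned
[default]
rw=randread
size=20g
bs=256k
ioengine=libaio
direct=1
numjobs=1
iodepth=64
iodepth_batch=64
filename=/dev/sdb
runtime=600
write_bw_log=iscsiread
```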

I'm mostly concerned with iscsi/rbd. I haven't yet isolated iscsi by
itself against a plain file, though I have run tests using straight
librados (non-iscsi) which show that better performance IS possible via
rbd. I'm still trying to isolate the bottleneck between iscsi and tgt,
but it may end up being a combination of both. The tgt rbd backend is
not using the asynchronous IO functions from librados, so I will also
modify that code to see if it improves things. I've been talking to the
tgt developers separately about the same issue.

thanks,
 Wyllys



On Mon, Aug 25, 2014 at 9:39 PM, Mike Christie <micha...@cs.wisc.edu> wrote:
> Also see what Donald recommends for increasing the iscsi and device
> queue depths. You will want the device and fio queue depths to be
> similar. For bs, you should use something like 256K. I think you then
> also want --iodepth_batch to be around the queue depth.
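>
> For example, a command-line invocation along those lines (device path
> and runtime here are placeholders) would be:

```shell
fio --name=rbdtest --filename=/dev/sdb --rw=randread --bs=256k \
    --ioengine=libaio --direct=1 --iodepth=64 --iodepth_batch=64 \
    --runtime=60 --time_based
```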
>
> For the iscsi settings, make sure they actually got negotiated by running
>
> iscsiadm -m session -P 2
>
> after you login.
>
> On the tgt side, you would also want to increase the per session queue
> depth from 128 to whatever you set for node.session.cmds_max.
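>
> For reference, the iscsid.conf knobs in question look roughly like this
> (the values are examples to tune; the tgt per-session depth should match):

```ini
# /etc/iscsi/iscsid.conf -- example values, tune to your workload
node.session.cmds_max = 1024      # per-session command window
node.session.queue_depth = 128    # per-LUN device queue depth
```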
>
> Also remember, open-iscsi is a little odd in that you cannot just change
> the iscsid.conf settings and have them take effect on the next login. You
> would have to do discovery and then relogin, or if you want to set an
> iscsi setting for a specific target portal you would do
>
> iscsiadm -m node -T target -p ip -o update -n mysetting_like_node.session.cmds_max -v 1024
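>
> That is, roughly (the target name and portal IP below are placeholders):

```shell
# re-run discovery so updated iscsid.conf settings are recorded, then relogin
iscsiadm -m discovery -t sendtargets -p 192.168.1.10
iscsiadm -m node -T iqn.example:target -p 192.168.1.10 --logout
iscsiadm -m node -T iqn.example:target -p 192.168.1.10 --login
```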
>
>
> I did not see your reply about whether iscsi alone was slow, or just
> iscsi with rbd.
>
>
> On 08/25/2014 06:43 PM, Gruher, Joseph R wrote:
>> Try setting some queue depth, like 64. Not sure what fio defaults to if not 
>> specified, but if it is 1 that won't yield good performance.
>>
>>
>> -------- Original message --------
>> From: Wyllys Ingersoll
>> Date:08/25/2014 3:49 PM (GMT-08:00)
>> To: open-iscsi@googlegroups.com
>> Subject: Re: iscsi over RBD performance tips?
>>
>> Yes, using open-iscsi with tgt as the target side.
>>
>> I used fio with the following job file.  I only used 1 job (thread) because 
>> I want to see the max that a single job can read at a time.  Even after 
>> maximizing MaxXmitDataSegmentLength and MaxRecvDataSegmentLength, I don't 
>> see much difference.
>>
>> [default]
>> rw=randread
>> size=20g
>> bs=16m
>> ioengine=libaio
>> direct=1
>> numjobs=1
>> filename=/dev/sdb
>> runtime=600
>> write_bw_log=iscsiread
>>
>>
>> Then I ran fio as follows:
>> $ fio iscsi.job
>>
>>
>>
>>
>>
>> On Friday, August 22, 2014 5:00:33 PM UTC-4, Mike Christie wrote:
>> Are you using linux for the initiator? If so, what throughput do you get 
>> from just using the open-iscsi initiator connected to tgt with a ram disk?
>>
>> I just installed RBD here for work, so let me check it out. What io tool are 
>> you using, and if it is something like fio, could you post the arguments you 
>> used to run it?
>>
>>
>> On Aug 21, 2014, at 4:10 PM, Wyllys Ingersoll <wyl...@gmail.com> wrote:
>>
>>
>> I'm looking for suggestions about maximizing performance when using an RBD 
>> backend (Ceph) over a 10Gb Ethernet link.  In my testing, I see the read 
>> throughput max out at about 100Mbyte/second for just about any block size 
>> above 4K (below 4K it becomes horribly slow), and write operations are about 
>> 40Mbyte/second.
>>
>> Using librados directly to read from the same backend pool/image yields much 
>> higher numbers, so the issue seems to be in the iscsi/bs_rbd backend.  
>> Regardless of the data sizes being read, the max throughput I am seeing is 
>> about 80% slower than using librados directly.
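>>
>> To put rough numbers on that gap (the librados figure below is back-derived 
>> from the ~80% difference, so treat it as approximate):

```python
# Back-of-the-envelope: if iscsi/bs_rbd reads ~100 MB/s and that is ~80%
# slower than direct librados, the implied direct-librados rate is:
iscsi_read_mbs = 100.0          # observed via iscsi/bs_rbd
slowdown = 0.80                 # "about 80% slower"
librados_read_mbs = iscsi_read_mbs / (1.0 - slowdown)
print(librados_read_mbs)        # -> 500.0
```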
>>
>> Any suggestions would be much appreciated.
>>
>> thanks,
>>   Wyllys
>>
>>
>> --
>> You received this message because you are subscribed to the Google Groups 
>> "open-iscsi" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to open-iscsi+...@googlegroups.com.
>> To post to this group, send email to open-...@googlegroups.com.
>> Visit this group at http://groups.google.com/group/open-iscsi.
>> For more options, visit https://groups.google.com/d/optout.
>>
>>
>>
>
