On 11/13/2014 10:17 AM, David Moreau Simard wrote:
> Running into weird issues here as well in a test environment. I don't have a
> solution either, but perhaps we can find some things in common...
> 
> Setup in a nutshell:
> - Ceph cluster: Ubuntu 14.04, Kernel 3.16.7, Ceph 0.87-1 (OSDs with separate
> public/cluster networks on 10 Gbps links)
> - iSCSI Proxy node (targetcli/LIO): Ubuntu 14.04, Kernel 3.16.7, Ceph 0.87-1 
> (10 Gbps)
> - Client node: Ubuntu 12.04, Kernel 3.11 (10 Gbps)
> 
> Relevant cluster config: writeback cache tiering with NVMe PCI-E cards (2
> replicas) in front of an erasure-coded pool (k=3, m=2) backed by spindles.
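> 
> For context, the tier was set up along these lines (pool names and PG counts
> below are illustrative, not the exact ones used):
> 
> # EC profile and base pool (k=3, m=2 as above)
> ceph osd erasure-code-profile set ecprofile k=3 m=2
> ceph osd pool create ecpool 256 256 erasure ecprofile
> # NVMe cache pool, 2 replicas, set as a writeback tier in front of it
> ceph osd pool create cachepool 128 128
> ceph osd pool set cachepool size 2
> ceph osd tier add ecpool cachepool
> ceph osd tier cache-mode cachepool writeback
> ceph osd tier set-overlay ecpool cachepool
> ceph osd pool set cachepool hit_set_type bloom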
> 
> I'm following the instructions here: 
> http://www.hastexo.com/resources/hints-and-kinks/turning-ceph-rbd-images-san-storage-devices
> No issues with creating and mapping a 100GB RBD image and then creating the 
> target.
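> 
> The steps boil down to roughly this (image and target names are examples;
> the backstore syntax varies between targetcli versions, with older ones
> using 'iblock' instead of 'block'):
> 
> rbd create rbd/iscsi-test --size 102400
> rbd map rbd/iscsi-test    # appears as e.g. /dev/rbd0
> targetcli /backstores/block create name=iscsi-test dev=/dev/rbd0
> targetcli /iscsi create iqn.2014-11.com.example:iscsi-test
> targetcli /iscsi/iqn.2014-11.com.example:iscsi-test/tpg1/luns create /backstores/block/iscsi-test
> targetcli /iscsi/iqn.2014-11.com.example:iscsi-test/tpg1/acls create iqn.1993-08.org.debian:01:client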
> 
> I'm interested in finding out the overhead/performance impact of re-exporting
> through iSCSI, so the idea is to run benchmarks.
> Here's a fio test I'm trying to run on the client node on the mounted iscsi 
> device:
> fio --name=writefile --size=100G --filesize=100G --filename=/dev/sdu --bs=1M 
> --nrfiles=1 --direct=1 --sync=0 --randrepeat=0 --rw=write --refill_buffers 
> --end_fsync=1 --iodepth=200 --ioengine=libaio
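> 
> To get a baseline without iSCSI in the path, the plan is to run the same job
> directly against the mapped RBD device on the proxy node and compare (device
> name is an example):
> 
> fio --name=writefile --size=100G --filesize=100G --filename=/dev/rbd0 --bs=1M
> --nrfiles=1 --direct=1 --sync=0 --randrepeat=0 --rw=write --refill_buffers
> --end_fsync=1 --iodepth=200 --ioengine=libaio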
> 
> The benchmark eventually hangs towards the end of the test for several long
> seconds before completing.
> On the proxy node, the kernel complains about iSCSI portal login timeouts:
> http://pastebin.com/Q49UnTPr and I also see irqbalance errors in syslog:
> http://pastebin.com/AiRTWDwR
> 

You are hitting a different issue. German Anders is most likely correct:
you hit the rbd hang. That then caused the iSCSI/SCSI command to time out,
which caused the SCSI error handler to run. In your logs we can see that the
LIO error handler received a task abort from the initiator, and that the
abort itself timed out, which caused the escalation (the iscsi portal login
related messages).
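
To confirm the rbd hang is the root cause, a couple of things are worth
checking the next time it happens (the paths below are standard kernel
interfaces, nothing specific to this setup):

# On the proxy node: outstanding OSD requests for the kernel rbd client.
# Requests that sit here for a long time point at a hung rbd I/O
# (requires debugfs to be mounted).
cat /sys/kernel/debug/ceph/*/osdc

# On the initiator: the SCSI command timer that must expire before the
# error handler and task abort kick in (default 30 seconds).
cat /sys/block/sdu/device/timeout

# Dump blocked (D state) tasks on the proxy node to dmesg; a hung rbd
# request will usually show up here.
echo w > /proc/sysrq-trigger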