On 06/04/2017 09:42, Nick Fisk wrote:
>
> I assume Brady is referring to the death spiral LIO gets into with
> some initiators, including VMware, if an IO takes longer than about
> 10s. I haven’t heard of any fix, and can’t see any relevant changes,
> so I would assume this issue still remains.
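
As I understand that failure mode, the problem is that a blocked librbd
request can easily exceed the ~10s abort window mentioned above, at which
point the initiator starts escalating. Below is a rough, untested sketch
of how a librados/librbd client can be told to fail a stuck op instead of
blocking indefinitely; rados_osd_op_timeout is, as far as I know, a
client-side librados option, and the pool/image names are placeholders:

    # Rough sketch (untested): cap how long a librbd I/O can block by setting
    # a client-side RADOS op timeout, so a stuck request returns an error
    # instead of hanging past the initiator's abort window.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    # Fail any OSD op that gets no reply within 8 seconds (kept below the
    # ~10s threshold discussed above).
    cluster.conf_set('rados_osd_op_timeout', '8')
    cluster.connect()

    ioctx = cluster.open_ioctx('rbd')          # placeholder pool name
    image = rbd.Image(ioctx, 'iscsi-lun0')     # placeholder image name
    try:
        image.write(b'\0' * 4096, 0)           # now blocks for at most ~8s
    except Exception as exc:                   # timeout surfaces as an error, not a hang
        print('write failed instead of hanging:', exc)
    finally:
        image.close()
        ioctx.close()
        cluster.shutdown()

Whether failing fast is actually nicer for ESXi than blocking is a separate
question, of course; this is only meant to show where the knob sits.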
>
>  
>
> I would look at either SCST or NFS for now.
>
LIO-TCMU + librbd iSCSI [1] [2] looks really promising and seems to be
the way to go. It would be great if somebody has insight into the
maturity of the project: is it ready for testing purposes?
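
For anyone who has not dug into it yet, my understanding of the
architecture is that tcmu-runner's rbd handler opens the image through
librbd in userspace and hands the I/O to LIO through the TCMU ("user")
backstore, so no krbd mapping is needed on the gateway. A rough,
untested sketch of what the wiring might look like with rtslib-fb is
below; the class names, the "rbd/<pool>/<image>" config-string format
and the IQN are assumptions on my part, so please correct me if the
real tooling does this differently:

    # Rough, untested sketch: expose an RBD image as a TCMU ("user:rbd")
    # backstore and export it over iSCSI using rtslib-fb. Every name, the
    # size and the config-string format below are illustrative assumptions.
    from rtslib_fb.tcm import UserBackedStorageObject
    from rtslib_fb.target import Target, TPG, LUN, NetworkPortal
    from rtslib_fb.fabric import ISCSIFabricModule

    # tcmu-runner's rbd handler is selected and pointed at the image via the
    # config string (assumed format: handler/pool/image).
    so = UserBackedStorageObject(name='lun0',
                                 config='rbd/rbd/iscsi-lun0',
                                 size=10 * 1024**3)  # should match the image size

    iscsi = ISCSIFabricModule()
    target = Target(iscsi, 'iqn.2017-04.com.example:ceph-gw')  # made-up IQN
    tpg = TPG(target, 1)
    tpg.enable = True
    NetworkPortal(tpg, '0.0.0.0', 3260)   # listen on all addresses
    LUN(tpg, storage_object=so)           # LUN 0; ACLs/CHAP left out for brevity

This would have to run as root on the gateway with tcmu-runner already
running; in practice one would presumably drive it through targetcli or
whatever tooling the project ends up shipping rather than raw rtslib.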

Cheers

Cédric

[1] https://ceph.com/planet/ceph-rbd-and-iscsi/
[2] https://github.com/open-iscsi/tcmu-runner
>
>  
>
> From: ceph-users [mailto:[email protected]] On Behalf Of Adrian Saul
> Sent: 06 April 2017 05:32
> To: Brady Deetz <[email protected]>; ceph-users <[email protected]>
> Subject: Re: [ceph-users] rbd iscsi gateway question
>
>  
>
>  
>
> I am not sure if there is a hard and fast rule you are after, but
> pretty much anything that would cause Ceph transactions to be blocked
> (flapping OSD, network loss, hung host) has the potential to block RBD
> IO, which would cause your iSCSI LUNs to become unresponsive for that
> period.
>
>  
>
> For the most part though, once that condition clears things keep
> working, so it’s not like a hang where you need to reboot to clear it.
> Some situations we have hit with our setup:
>
>  
>
>   * Failed OSDs (dead disks) – no issues
>   * Cluster rebalancing – OK if throttled back to keep service times down
>   * Network packet loss (bad fibre) – painful, broken communication
>     everywhere, caused a krbd hang needing a reboot
>   * RBD snapshot deletion – disk latency through the roof, cluster
>     unresponsive for minutes at a time, won’t do that again.
>
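
That matches what I would expect: each of the situations above just shows
up as an RBD request that does not complete for a while. For anyone who
wants to put numbers on it before pointing an initiator at the cluster,
here is a small, untested probe using the Python rados/rbd bindings; the
pool and image names and the 5 second warning threshold are arbitrary
placeholders:

    # Rough sketch (untested): time synchronous 4 KiB writes to an RBD image
    # to see how long I/O really blocks while an OSD is down, the cluster is
    # rebalancing, a snapshot is being deleted, and so on.
    import time
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')          # placeholder pool name
    image = rbd.Image(ioctx, 'iscsi-lun0')     # placeholder image name
    try:
        while True:
            start = time.time()
            image.write(b'\0' * 4096, 0)       # synchronous write to offset 0
            elapsed = time.time() - start
            if elapsed > 5.0:                  # long enough to worry an initiator
                print('write blocked for %.1f s' % elapsed)
            time.sleep(1)
    finally:
        image.close()
        ioctx.close()
        cluster.shutdown()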
>  
>
>  
>
>  
>
> From: ceph-users [mailto:[email protected]] On Behalf Of Brady Deetz
> Sent: Thursday, 6 April 2017 12:58 PM
> To: ceph-users
> Subject: [ceph-users] rbd iscsi gateway question
>
>  
>
> I apologize if this is a duplicate of something recent, but I'm not
> finding much. Does the issue still exist where dropping an OSD results
> in a LUN's I/O hanging?
>
>  
>
> I'm attempting to determine if I have to move off of VMware in order
> to safely use Ceph as my VM storage.
>

_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
