Thanks for sharing, Kurt.
Yes, I have read the article you mentioned, but I also read another one:
http://www.hastexo.com/resources/hints-and-kinks/turning-ceph-rbd-images-san-storage-devices.
It uses LIO, which is the current standard Linux kernel SCSI target.
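For reference, the LIO approach in that article boils down to mapping the RBD image into a kernel block device on the gateway host and then exporting it with targetcli. This is only a sketch; the pool, image, and IQN names are placeholders, not taken from this thread:

```shell
# Map the RBD image as a kernel block device on the iSCSI gateway
# ("mypool/myimage" is a placeholder image name):
rbd map mypool/myimage          # shows up as e.g. /dev/rbd0

# Export that block device through LIO via targetcli
# (backstore and IQN names below are placeholders):
targetcli /backstores/iblock create name=rbd0 dev=/dev/rbd0
targetcli /iscsi create iqn.2013-06.com.example:rbd0
targetcli /iscsi/iqn.2013-06.com.example:rbd0/tpg1/luns \
    create /backstores/iblock/rbd0
```

Note that this path goes through the kernel rbd client, whereas the stgt approach below talks to the cluster through librbd in userspace.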
There is another doc in the ceph site:
http://ceph.com/w/index.php?title=ISCSI&redirect=no
I don't quite understand how multipath works here. Are the two iSCSI
targets on the same system or on two different ones?
Has anybody tried this already?
------------------ Original ------------------
From: "Kurt Bauer"<[email protected]>;
Date: Tue, Jun 18, 2013 03:52 PM
To: "Da Chun"<[email protected]>;
Cc: "ceph-users"<[email protected]>;
Subject: Re: [ceph-users] ceph iscsi questions
Hi,
Da Chun wrote: Hi List,
I want to deploy a ceph cluster with the latest cuttlefish, and export it via an
iSCSI interface to my applications.
Some questions here:
1. Which Linux distro and release would you recommend? I used Ubuntu 13.04
for testing purposes before.
For the ceph-cluster or the "iSCSI-GW"? We use Ubuntu 12.04 LTS for the
cluster and the iSCSI-GW, but tested Debian wheezy as iSCSI-GW too. Both work
flawlessly.
2. Which iSCSI target is better? LIO, SCST, or others?
Have you read http://ceph.com/dev-notes/adding-support-for-rbd-to-stgt/ ?
That's what we do and it works without problems so far.
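For anyone following along, the stgt/rbd setup described in that blog post amounts to a targets.conf stanza roughly like the one below. This is only a sketch; the pool, image, and IQN names are placeholders:

```
# /etc/tgt/targets.conf -- sketch only; "iscsi-pool" and "rbd-image"
# are placeholder pool/image names, not from this thread.
<target iqn.2013-06.com.example:rbd-image>
    driver iscsi
    # use stgt's rbd backing-store type (librbd, userspace)
    bs-type rbd
    backing-store iscsi-pool/rbd-image
</target>
```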
3. The system running the iSCSI target will be a single point of failure. How
can we eliminate it and make good use of ceph's distributed nature?
That's a question we asked ourselves too. In theory one can set up two
iSCSI-GWs and use multipath, but what does that do to the cluster? Will something
break if two iSCSI targets use the same rbd image in the cluster? Even if I use
failover mode only?
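To make the failover idea concrete: on the initiator side, an active/passive dm-multipath configuration for a LUN exported by two gateways might look roughly like this (the WWID below is a placeholder, not a real device):

```
# /etc/multipath.conf -- sketch only; the wwid is a placeholder for
# the LUN exported identically by both iSCSI gateways.
multipaths {
    multipath {
        wwid                  360000000000000000000000000000001
        path_grouping_policy  failover
    }
}
```

With the failover path-grouping policy only one path carries I/O at a time, so during normal operation only one gateway actually touches the rbd image; the open question in this thread is what happens at the Ceph level during a path switchover.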
Has someone already tried this and is willing to share their knowledge?
Best regards,
Kurt
Thanks!
_______________________________________________ ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com