On Thu, Apr 18, 2019 at 3:47 PM Wesley Dillingham
<[email protected]> wrote:
>
> I am trying to determine some sizing limitations for a potential iSCSI 
> deployment and am wondering what's still the current lay of the land:
>
> Are the following still accurate as of the ceph-iscsi-3.0 implementation 
> assuming CentOS 7.6+ and the latest python-rtslib etc from shaman:
>
> Limit of 4 gateways per cluster (source: 
> https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/block_device_guide/using_an_iscsi_gateway#requirements)

Yes -- at least that's what's tested. I don't know of any immediate
code-level restrictions, however. You can technically isolate a
cluster of iSCSI gateways by configuring them to access their
configuration from different pools. Of course, then things like the
Ceph Dashboard won't work correctly.
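As a sketch of what that isolation looks like: each gateway group reads its shared configuration object from the pool named in its `iscsi-gateway.cfg`, so a second group can be pointed at its own pool. The pool name below is illustrative, not a recommendation:

```ini
# /etc/ceph/iscsi-gateway.cfg on the second, isolated gateway group.
# "pool" controls where these gateways store their shared configuration
# object; giving each group its own pool keeps them independent.
[config]
cluster_name = ceph
gateway_keyring = ceph.client.admin.keyring
pool = iscsi-group2
api_secure = false
```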

> Limit of 256 LUNs per target (source: 
> https://github.com/ceph/ceph-iscsi-cli/issues/84#issuecomment-373359179 ). 
> There is mention of this being updated in this comment: 
> https://github.com/ceph/ceph-iscsi-cli/issues/84#issuecomment-373449362 per 
> an update to rtslib, but I still see the limit as 256 here: 
> https://github.com/ceph/ceph-iscsi/blob/master/ceph_iscsi_config/lun.py#L984 
> I'm wondering if this is just an outdated limit or whether there is still a 
> valid reason to limit the number of LUNs per target.

It's still a limit, although it could possibly be removed. Until
recently, adding hundreds of LUNs was painfully slow; assuming that has
been addressed, the limit could perhaps be lifted -- the main downside
of removing it is that it makes testing harder.
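For clarity, the check referenced above amounts to a simple per-target cap. A minimal sketch (names are illustrative, not the actual ceph-iscsi API):

```python
# Hypothetical sketch of the per-target LUN cap discussed above.
# The real guard lives in ceph_iscsi_config/lun.py; this only
# illustrates the shape of the check, not the actual code.
MAX_LUNS_PER_TARGET = 256  # current hard limit


def can_add_lun(current_lun_count: int) -> bool:
    """Return True if one more LUN may be mapped to the target."""
    return current_lun_count < MAX_LUNS_PER_TARGET
```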

> Limit of 1 target per cluster: 
> https://github.com/ceph/ceph-iscsi-cli/issues/104#issuecomment-396224922

SUSE added support for multiple targets per cluster.

>
> Thanks in advance.
>
> Respectfully,
>
> Wes Dillingham
> [email protected]
> Site Reliability Engineer IV - Platform Storage / Ceph
>
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 
Jason
