Hi Max,
just a side note: we are using a fork of RBDSR
(https://github.com/vico-research-and-consulting/RBDSR) to connect
XenServer 7.2 Community to RBDs directly using rbd-nbd.
After a bit of hacking this works pretty well: direct RBD creation from
the storage repository, live migration between XenServer nodes within a
pool, and migration between pools.
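
For reference, the attach path underneath looks roughly like the
following sketch. The pool and image names are made up for illustration;
in practice RBDSR drives these steps from the XenServer storage layer:

```shell
# Sketch only: pool/image names and the device path are hypothetical.
# RBDSR performs the equivalent of these steps when a VDI is created
# and attached on a XenServer dom0.

# Create an RBD image for a VM disk (size in MiB; 4 GiB here)
rbd create --size 4096 rbd_xen/vm-disk-1

# Map it on dom0 via the NBD client; prints the device, e.g. /dev/nbd0
rbd-nbd map rbd_xen/vm-disk-1

# ... the nbd device is then used as the VDI backing device ...

# Unmap when the VDI is detached
rbd-nbd unmap /dev/nbd0
```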
A few weeks ago we had to decide between two options:
* Do pioneer work with the new and really fancy active/active
multipathed LIO/TCMU infrastructure
o compile a bunch of software and create packages for Ubuntu
("ceph-iscsi-config", "python-rtslib-fb", a recent
linux kernel, "ceph-iscsi-cli", "ceph-iscsi-tools",
"tcmu-runner", ...)
o get the setup stable, work on a complex setup with no real user base
o deploy iscsi gateways on our 5 osd nodes and distribute workload
on some pairs of gateways
o put krbd load on the OSD nodes and add complexity to Ceph upgrades
o accept the limitations of iSCSI, especially hacky edge cases when
gateways go away and return
o write automation code for our centralized operations inventory
system, which interfaces with and manages the associations of Xen
VMs to iSCSI storage repositories, iSCSI volumes, and RBD images
* Do pioneer work on RBDSR
o improve the storage glue code of RBDSR
o get the setup stable, work on a simpler setup with no user base
o have real multipathed rbd access without limitations
o get good performance from an overall-cluster point of view
(especially LACP bandwidth usage between XenServer hosts and Ceph OSDs)
o have the librbd (rbd-nbd, Ceph client) workload on every XenServer dom0
=> better scaling: more XenServer hosts -> better overall performance
o utilize the RBD cache (there is nothing comparable in XenServer 7.2
Community)
o use Ceph's capabilities to create snapshots and clone systems
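
To illustrate the last two points: the librbd cache is a client-side
ceph.conf setting on the dom0, and snapshot/clone is plain rbd CLI. The
values and pool/image names below are illustrative, not our production
configuration:

```ini
# ceph.conf on the XenServer dom0 (client side); values are examples
[client]
rbd cache = true
rbd cache size = 33554432                 ; 32 MiB
rbd cache writethrough until flush = true ; safe default until the
                                          ; guest issues a flush
```

```shell
# Snapshot/clone sketch; pool and image names are hypothetical
rbd snap create rbd_xen/vm-disk-1@golden
rbd snap protect rbd_xen/vm-disk-1@golden      # clones require a protected snap
rbd clone rbd_xen/vm-disk-1@golden rbd_xen/vm-disk-2
```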
What do you think about that?
Regards
Marc
On 12.06.2018 at 21:03, Max Cuttins wrote:
>
> Hi everybody,
>
> I have a running iSCSI-Ceph environment that connects to XenServer 7.2.
> I have some doubts and rookie questions about iSCSI.
>
> 1) Xen refused to connect to the iSCSI gateway until I enabled
> multipath on Xen.
> That's fine with me. But is it correct to say that multipath is more
> than just a feature here, that it is the mandatory way to connect?
> Is this normal? I thought iSCSI multipath was backward-compatible with
> single-path access.
>
> 2) The connection completed correctly with multipath.
> I see on the XEN dashboard:
>
> *2 of 2 paths active* (2 iSCSI sessions)
>
> I read that, for now, the iSCSI gateway only supports active/passive
> multipath.
> Is this already working? :)
>
> 3) I see "optimized/not optimized" on my Ceph dashboard.
> What does this stand for?
>
> 4) Performance.
> I ran a simple test (nothing statistically proven) and saw these
> values:
>
> dd if=/dev/zero of=/iscsi-test/testfile bs=1G count=1 oflag=direct
> 1073741824 bytes (1.1 GB) copied, 6.72009 s, *160 MB/s*
>
> dd if=/dev/zero of=/ceph-test/testfile bs=1G count=1 oflag=direct
> 1073741824 bytes (1.1 GB) copied, 1.57821 s, *680 MB/s*
>
> Of course I expected a drop (due to iSCSI overhead)... but this is
> 4x slower than the direct client, which seems a little high to me.
> However... is this *more-or-less* what I should consider the expected
> drop with iSCSI, or will this gap be reduced in the future?
>
>
>
>
>
>
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com