On 2019/3/25 14:36, Vijay Bellur wrote:
Hi Xiubo,
On Fri, Mar 22, 2019 at 5:48 PM Xiubo Li <[email protected]> wrote:
On 2019/3/21 11:29, Xiubo Li wrote:
All,
I am one of the contributors to the gluster-block
<https://github.com/gluster/gluster-block> [1] project, and I also
contribute to the Linux kernel and the open-iscsi
<https://github.com/open-iscsi> [2] project.
NBD has been around for some time, but recently the Linux kernel's
Network Block Device (NBD) driver has been enhanced to work with more
devices, and the option to configure it over netlink has been added.
So, I recently tried to provide a glusterfs-client-based NBD driver.
Please refer to github issue #633
<https://github.com/gluster/glusterfs/issues/633> [3]; the good news
is that I have working code, with most of the basic things in place, at
the nbd-runner project <https://github.com/gluster/nbd-runner> [4].
This is nice. Thank you for your work!
As mentioned, nbd-runner (NBD protocol) will work in the same layer
as tcmu-runner (iSCSI protocol); it is not trying to replace the
great gluster-block/ceph-iscsi-gateway projects.
It just provides the common library to do the low-level stuff, such
as the sysfs/netlink operations and the IOs from the NBD kernel
socket, in the same way that the tcmu-runner project handles the
sysfs/uio operations and the IOs from the kernel SCSI/iSCSI layer.
The nbd-cli tool will work like iscsi-initiator-utils, and the
nbd-runner daemon will work like the tcmu-runner daemon; that's all.
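To make the layering concrete, below is a minimal sketch of the classic
ioctl-based NBD device attach, roughly the kind of low-level work the
common library would hide from the handlers (the newer netlink interface
replaces these ioctls but follows the same connect-and-serve model). The
function name and flow here are only my illustrative assumptions, not
actual nbd-runner code, and error handling is trimmed:

    /*
     * Illustrative sketch only: classic ioctl-based NBD attach.
     * Not nbd-runner code; error handling is trimmed for brevity.
     */
    #include <fcntl.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <linux/nbd.h>

    static int nbd_attach(const char *nbd_path, uint64_t size_bytes)
    {
        int sv[2];

        /* One end of the socketpair goes to the kernel, the other
         * end is served by the daemon. */
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
            return -1;

        int nbd = open(nbd_path, O_RDWR);      /* e.g. "/dev/nbd0" */
        if (nbd < 0)
            return -1;

        ioctl(nbd, NBD_SET_BLKSIZE, 4096UL);
        ioctl(nbd, NBD_SET_SIZE_BLOCKS, (unsigned long)(size_bytes / 4096));
        ioctl(nbd, NBD_SET_SOCK, sv[0]);

        /* NBD_DO_IT blocks while the device stays connected; the daemon
         * would run it in a dedicated thread and serve the NBD requests
         * arriving on sv[1] by calling into the backend handler. */
        ioctl(nbd, NBD_DO_IT);

        close(nbd);
        return 0;
    }

The nbd-cli tool would then only need to trigger this kind of setup and
teardown, much like iscsi-initiator-utils logs targets in and out.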
Do you have thoughts on how nbd-runner currently differs or would
differ from tcmu-runner? It might be useful to document the
differences in github (or elsewhere) so that users can make an
informed choice between nbd-runner & tcmu-runner.
Yeah, this makes sense and I will document the differences on github.
Currently, for open-iscsi/tcmu-runner there are already many existing
tools to support them in production, while for NBD we may need to
implement those ourselves; correct me if I am wrong here :-)
In tcmu-runner, the different backend storages have separate
handlers: the glfs.c handler for Gluster, the rbd.c handler for Ceph,
etc. These handlers perform the actual IOs against the backend
storage services once the IO paths have been set up by
ceph-iscsi-gateway/gluster-block.
In the same way, we can support all kinds of backend storages, such as
Gluster/Ceph/Azure..., each as a separate handler in nbd-runner, and
the handlers will not need to care about updates and changes in the
low-level NBD code. A rough sketch of what such a handler interface
could look like follows below.
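Here is a hypothetical per-backend handler interface, for illustration
only; the struct name and fields are my assumptions, not the actual
nbd-runner (or tcmu-runner) API:

    /*
     * Hypothetical per-backend handler interface, for illustration only.
     * The core daemon would own all NBD netlink/socket handling and only
     * call into these callbacks for the actual backend IO.
     */
    #include <stddef.h>
    #include <sys/types.h>

    struct nbd_device;                 /* opaque, owned by the core daemon */

    struct nbd_handler {
        const char *name;              /* e.g. "gluster", "ceph", "azure" */

        /* Connect to / disconnect from the backend storage. */
        int  (*open)(struct nbd_device *dev, const char *cfgstring);
        void (*close)(struct nbd_device *dev);

        /* Serve the actual IO once the path is set up. */
        ssize_t (*read)(struct nbd_device *dev, void *buf,
                        size_t count, off_t offset);
        ssize_t (*write)(struct nbd_device *dev, const void *buf,
                         size_t count, off_t offset);
        int (*flush)(struct nbd_device *dev);
    };

    /* Each backend (glfs, rbd, ...) would register its handler here. */
    int nbd_handler_register(struct nbd_handler *handler);

A Gluster handler would then implement these callbacks on top of
libgfapi, much like tcmu-runner's glfs.c handler does for SCSI.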
Given that the charter for this project is to support multiple backend
storage projects, would it not be better to host the project in the
github repository associated with nbd [5]? Doing it that way could
provide a more neutral (as perceived by users) venue for hosting
nbd-runner and help you get more adoption for your work.
This is a good idea; I will try to push it forward.
Thanks very much Vijay.
BRs
Xiubo Li
Thanks,
Vijay
[5] https://github.com/NetworkBlockDevice/nbd
Thanks.
While this email is about announcing the project and asking for
more collaboration, I would also like to discuss the placement of
the project itself. Currently the nbd-runner project is expected to
be shared by our friends at the Ceph project too, to provide an NBD
driver for Ceph. I have personally worked closely with some of them
while contributing to the open-iscsi project, and we would like to
make this project a great success.
Now a few questions:
1. Can I continue to use http://github.com/gluster/nbd-runner as the
home for this project, even if it is shared by other filesystem
projects?
* I personally am fine with this.
2. Should there be a separate organization for this repo?
* While it may make sense in the future, for now I am not planning
to start anything new.
It would be great if we could reach some consensus on this soon, as
nbd-runner is a new repository. If there are no concerns, I will
continue to contribute to the existing repository.
Regards,
Xiubo Li (@lxbsz)
[1] - https://github.com/gluster/gluster-block
[2] - https://github.com/open-iscsi
[3] - https://github.com/gluster/glusterfs/issues/633
[4] - https://github.com/gluster/nbd-runner
_______________________________________________
Gluster-devel mailing list
[email protected]
https://lists.gluster.org/mailman/listinfo/gluster-devel