Ceph is quite complicated, and I suspect you're going to run into serious issues trying to get RBD support into iPXE. You'd essentially need to implement a tiny RBD client inside iPXE, which sounds like a substantial undertaking.

I'd suggest an alternative: Store the linux kernel and initramfs in Ceph Object Storage. iPXE can already boot from HTTP, so you can load your kernel/initrd via object storage, then let the kernel handle booting from the RBD.
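For what it's worth, the HTTP approach needs nothing beyond stock iPXE scripting. A minimal sketch, assuming the kernel and initrd have been uploaded as world-readable objects behind the Ceph RADOS Gateway (the hostname, bucket, and object names below are placeholders):

```
#!ipxe
dhcp
# Fetch kernel and initrd over plain HTTP from the RADOS Gateway
kernel http://rgw.example.com/boot/vmlinuz
initrd http://rgw.example.com/boot/initrd.img
boot
```

From there the kernel and initramfs are responsible for reaching the RBD image; iPXE's job is done once `boot` hands off.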

I can't really help with how to get the XenServer initrd to support Ceph, but that seems like a far simpler process than updating iPXE to support Ceph.

On 5/13/2014 11:23 AM, Stephen Perkins wrote:

Hi all,

I have a goal in mind and I'm not entirely sure how to reach it. So... with this in mind, I thought I would discuss what I want as an end result and then ask if a certain approach may make sense.

End goal: Create a highly available (no single point of failure), scale-out infrastructure for booting and running diskless XenServer hosts and lots of guest operating systems with live migration capabilities. I want this with as few systems in place as possible.

While most people will start down the iSCSI path, I am not entirely sure that this is the approach I would like to take. iPXE's full iSCSI stack is awesome and provides great capabilities, but it requires a lot of clustering work to make it highly available. This is compounded if you want a cluster with more than two nodes.

I am interested in using the Ceph clustered storage system. It already provides a highly available, scale-out solution and, once configured and working, gives me the highly available storage I want while integrating well with the Xen clients.

But... the hard part is that I want to boot diskless XenServers from this Ceph store. This is where:

                1) iPXE comes in

                2) my knowledge just about ends

My thought is that I would boot a customized iPXE from a tiny USB DOM or a highly available DHCP/TFTP/PXEboot infrastructure. Once iPXE is running, I would like to boot directly from a ceph cluster volume instead of the more standard iSCSI volume.

So, I wanted to ask for thoughts on whether it makes sense to try to develop another backend connectivity option for iPXE. I would like to look at adding a Ceph/RBD option that would let me provide a list of IP addresses (and other needed config info) for the Ceph cluster and mount a Ceph store to boot from.

Then... I would have to address the problem of how to get an initrd for XenServer that has the Ceph modules available and lets me boot a root file system from there.
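To make that concrete, the initramfs would roughly need to load the rbd kernel module, map the image, and mount it as root. A rough sketch, assuming the rbd module, the ceph-common userspace tools, a ceph.conf listing the monitors, and a client keyring are all bundled into the initrd (pool name, image name, and paths below are placeholders):

```sh
#!/bin/sh
# Hypothetical initramfs hook: map an RBD image and mount it as root.
modprobe rbd

# Map the image; monitor addresses come from the bundled ceph.conf
rbd map rbd/xenserver-root --id admin --keyring /etc/ceph/keyring

# The mapped image appears as a block device (e.g. /dev/rbd0)
mount /dev/rbd0 /sysroot
```

The hard part is probably less these commands than wiring them into XenServer's initrd build process.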

Is iPXE the correct place to look to help provide this?

Is this a hugely monumental project... or just a monumental project (given that the Ceph client code exists and is open source)?

I thought I would reach out here before writing to the Ceph group. If my approach is embarrassingly wrong, please feel free to let me know!

Thanks,

- Steve



_______________________________________________
ipxe-devel mailing list
[email protected]
https://lists.ipxe.org/mailman/listinfo.cgi/ipxe-devel
