Hi DragonflyBSD,

The newly arriving NVM Express PCIe SSDs seem to be rocking an interesting feature: they can be configured to present a single set of PCIe links, or they can be configured in a dual-port mode.
The obvious and easy use case for these devices is to give each socket in a two-socket system its own link to the SSD, providing some system-level failure tolerance and a direct path for each CPU.

My question to the list is broad: how, if at all, could Hammer2 be adapted to allow two separate systems to share the same storage device? I'd guess that the "simple" case might look very similar to mirroring today, except that only transaction metadata would need to be passed to the mirror: it already has access to the data, it just needs to know about the changes. Would the mirror have to be constrained to a stable snapshot as well, or is there nothing to fear in the mirror reading data from a device the master is actively writing to? Will the data that was there however many milliseconds ago still be there, or is there a need to fret about consistency between the mirror and the master as the mirror goes to read?

Hopefully I'm asking the right questions. Please feel free to address or disregard any specifics: I'd love some sharing on what may be possible, and where the snags are. We're going from 0.55 GB/s drives to 4 GB/s+ drives any day now, which is exciting, and high-availability computing that can share a single drive is really exciting to me. I hope I've whetted some appetites in Hammer-land. I tend to believe Hammer is in a good position to tap this capability, and thought I'd raise the notion.

Thanks for the ongoing work of excellence, Dragonfly, keep on!

-rektide
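P.S. To make the "only transaction metadata crosses hosts" idea concrete, here is a toy sketch. This is purely illustrative, not HAMMER2 internals: all class and method names are invented, and the shared device is modeled as an append-only block store (standing in for copy-on-write behavior, where committed blocks are never overwritten in place, which is what would make the mirror's direct reads safe).

```python
# Hypothetical model: a master and a mirror on two hosts, both attached
# to the same dual-port SSD. Data blocks are written once to the shared
# device; only a tiny root-pointer update is replicated to the mirror.

class SharedDevice:
    """Toy stand-in for the dual-port SSD: an append-only block store."""
    def __init__(self):
        self.blocks = {}   # block_id -> data
        self.next_id = 0

    def write_block(self, data):
        bid = self.next_id
        self.next_id += 1
        self.blocks[bid] = data   # new block; earlier blocks stay intact
        return bid

    def read_block(self, bid):
        return self.blocks[bid]

class Master:
    """Writes data to the shared device and emits metadata updates."""
    def __init__(self, dev):
        self.dev = dev
        self.root = None

    def commit(self, data):
        self.root = self.dev.write_block(data)
        return self.root   # this id is the only "metadata" to ship

class Mirror:
    """Never receives data over the wire; reads it from the device."""
    def __init__(self, dev):
        self.dev = dev
        self.root = None

    def apply_metadata(self, root_id):
        self.root = root_id   # learn about the change...

    def read(self):
        return self.dev.read_block(self.root)   # ...read data directly

dev = SharedDevice()
master, mirror = Master(dev), Mirror(dev)
meta = master.commit(b"hello")   # data hits the shared device once
mirror.apply_metadata(meta)      # only the root update crosses hosts
assert mirror.read() == b"hello"
```

Because old blocks are never overwritten, a mirror holding a stale root still reads consistent (if old) data; the open question above is whether real hardware and caches give the same guarantee.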
