> does anyone have serious plans to use more than one Sinara crate?

Absolutely. One of my primary motives for supporting DRTIO is coordination of multiple crates. The use case is ARTIQ coordinating entanglement distribution between a pair of qubit systems (each with its own crate) separated by an optical fiber. The future (>3 years) is the same but for >2 nodes. See the following paper for one approach using trapped-ion qubits.
http://iontrap.umd.edu/wp-content/uploads/2015/06/NphysModNet2015.pdf

> Multi-crate configurations require slightly complicated gateware support for
> DRTIO switches. The bandwidth between crates will also be limited to 10Gbps.

10 Gb/s doesn't strike me as a limitation for the foreseeable future.

> Crossing each switch will incur 100ns-200ns of latency

This has implications for some experiments. Fiber propagation over 10 m (10 km) takes about 48 ns (48 us). Demonstration experiments involving heralded entanglement of a pair of nodes (2 crates) have a low probability of success (~1e-6) and are repeated continuously (~1 MHz). With each rep the success (or failure) is typically reported to both nodes, so the rep rate is limited by node-node communication latency. Adding another 200 ns of DRTIO routing latency on top of a 10 m separation is a significant cost and may preclude the use of ARTIQ for early experiments (eg defect qubits where T2 ~ 10's of us). For larger node separations the fractional cost is smaller.

> There is currently a plan to support multi-crate in the hardware (this
> future-proofing simply means adding some SFPs, essentially) but no plan to
> support it in the gateware.

For now, an implementation using soft-core switching seems an adequate compromise, provided the system is designed in such a way that a future gateware implementation is straightforward.

> 1) slower response times.
> 2) blocking the kernel CPU by twice the latency (round-trip) when it needs to
> enquire about the space available in a remote RTIO FIFO.

Any implementation that requires round-trip communication to complete a DRTIO transaction is very bad due to fiber/free-space propagation delays. To first order, all DRTIO should assume receiving devices are ready to receive, and handle errors by a) reporting to the master crate and b) logging for post-processing. To second order, low-traffic advisory signaling like "FIFO 80% full" is fine. Plan for future deployments where communication propagation delays are 100's of us.
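To make the latency argument above concrete, here is a rough back-of-the-envelope sketch. All numbers are illustrative assumptions, not ARTIQ specifications: fiber propagation of ~4.9 ns/m (group index ~1.47, consistent with 10 m ~ 48 ns above), a 1 us baseline cycle (1 MHz rep rate), and a cycle model where each rep waits for one one-way heralding message.

```python
# Illustrative estimate of how added DRTIO switch latency stretches
# the cycle time of a heralded-entanglement experiment whose rep rate
# is limited by node-node communication.
# Assumed (hypothetical) numbers:
#   - fiber propagation: ~4.9 ns per metre (group index ~1.47)
#   - baseline cycle: 1000 ns (1 MHz rep rate)
FIBER_NS_PER_M = 4.9

def cycle_stretch(separation_m, switch_latency_ns, cycle_ns=1000.0):
    """Fractional increase in cycle time caused by the added switch
    latency, assuming each rep already pays the fixed cycle overhead
    plus one one-way fiber propagation delay."""
    prop_ns = separation_m * FIBER_NS_PER_M
    base = cycle_ns + prop_ns
    return switch_latency_ns / base

# 10 m separation, 200 ns of extra routing latency: ~19% longer cycle.
print(f"{cycle_stretch(10, 200):.1%}")       # 19.1%
# 10 km separation: propagation dominates, the penalty is tiny.
print(f"{cycle_stretch(10_000, 200):.2%}")   # 0.40%
```

Under these assumptions the 200 ns switch cost is comparable to the entire fiber-plus-cycle budget at 10 m, but negligible at 10 km, matching the point made above.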
In anticipation of a future all-gateware implementation of DRTIO routing, is the use of a dedicated soft-core CPU helpful?

> DRTIO switch support is also beneficial to the serial protocol between the
> Sayma AMC and Sayma RTM FPGAs - the current plan is to use a dumb protocol
> that doesn't have good timing resolution and is inefficient for things like
> SPI transfers, essentially a more open version of Channel Link II.

In this case you're relying on in-crate timing distribution. Seems fine.

_______________________________________________
ARTIQ mailing list
https://ssl.serverraum.org/lists/listinfo/artiq
