To further John's comments, Breakthrough Listen has a completely separate set 
of HPC servers installed at Green Bank that are connected to the same Ethernet 
switch as the ROACH2s and the VEGAS spectrometer's HPC servers.  When using the 
BL backend, up to 4 of the 8 ROACH2s are configured to send data to the BL 
backends rather than the VEGAS backends, and that all works fine.  There is a 
vision to multicast the UDP packets so that BL and VEGAS can process the same 
IF signal, but we haven't gone down that path yet.  One thing to be careful of 
is ensuring that no system floods the network (e.g. by sending packets to a MAC 
address that is down, which causes the switch to forward those packets to all 
connected devices), but that is not very onerous.  IMHO, maintaining a large 
shared environment is easier than maintaining several smaller environments, 
plus the upstream analog signal chain can all be shared rather than having to 
duplicate/split it for multiple digital systems.
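
For what it's worth, the consumer side of that multicast idea is not much code.
Here is a minimal sketch of a backend joining a multicast group and reading the
UDP stream; the group address, port, and buffer size are made-up placeholders,
not the actual Green Bank settings.

    # Minimal sketch of a backend subscribing to a multicast UDP stream.
    # Group address, port, and buffer size are illustrative placeholders,
    # not the real VEGAS/BL network settings.
    import socket
    import struct

    MCAST_GROUP = "239.1.1.1"   # hypothetical multicast group
    MCAST_PORT = 60000          # hypothetical port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MCAST_PORT))

    # Join the group so the switch forwards the ROACH2 packets to this host
    # as well as to any other subscribed backend.
    mreq = struct.pack("4s4s", socket.inet_aton(MCAST_GROUP),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    while True:
        packet, sender = sock.recvfrom(9000)  # jumbo-frame sized buffer
        # hand the payload to the spectrometer / search pipeline here

With IGMP snooping enabled on the switch, only hosts that have joined the group
see the traffic, which also helps with the flooding concern above.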

I guess the executive summary is that sharing the hardware with experiments 
that will not run simultaneously is relatively straightforward.  Sharing the 
hardware output to multiple back ends simultaneously is a bit trickier but 
still possible by using multicast (provided that the FPGA functionality is 
suitable for all consumers).

Dave

> On Jan 29, 2018, at 10:20, John Ford <jmfor...@gmail.com> wrote:
> 
> Hi Tom.
> 
> I think this is reasonably easy to manage.  At Green Bank, the spectrometer 
> consists of 8 ROACH-2s that are all reprogrammed for different observing 
> modes.  The modes are personalities stored on disk and loaded on command.  It 
> works fine.  You do have to coordinate to make sure only one computer is 
> commanding things.  If you're not hitting the wall performance-wise, rebooting 
> your control computer into different virtual machines is an interesting way 
> to make sure you don't get wrapped around that particular axle.  We never 
> attempted to run stuff on a virtual machine because we were trying to wring 
> all the performance we could out of our 10 gig ports.  It would be an 
> interesting experiment to see how much running in a VM costs in 
> performance...  
> 
> Managing the FPGA personalities is easy, I think.  Managing the software is 
> probably pretty easy as well if you package things up and have scripts to 
> start and stop the different experiments.
> 
> John
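
To make the "personalities stored on disk and loaded on command" idea concrete,
something like the sketch below (using the casperfpga package) is roughly what
is involved; the hostname, bitstream path, and register name are hypothetical
placeholders.

    # Sketch of loading a stored personality onto a board on command.
    # Hostname, .fpg path, and register name are hypothetical; error
    # handling is omitted for brevity.
    import casperfpga

    BOARD = "roach2-01"                                  # hypothetical hostname
    PERSONALITY = "/opt/personalities/spec_1024ch.fpg"   # hypothetical bitstream

    fpga = casperfpga.CasperFpga(BOARD)
    fpga.upload_to_ram_and_program(PERSONALITY)   # load the personality

    # Configure the running design (register name is illustrative).
    fpga.write_int("fft_shift", 0xFFFF)
    print(fpga.read_int("fft_shift"))

Wrapped in per-experiment start/stop scripts, that is essentially the
arrangement John describes, provided only one computer runs them at a time.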
> 
> 
> On Mon, Jan 29, 2018 at 12:17 AM, Jason Manley <jman...@ska.ac.za> wrote:
> Hi Tom
> 
> We switch firmware around on our boards regularly (~20min observation 
> windows) on KAT-7 and MeerKAT. But we maintain control of the various 
> bitstreams ourselves, and manage the boards internally.
> 
> There is a master controller which handles allocation of processing nodes to 
> various projects, and loads the appropriate firmware onto those boards for 
> their use. The master controller has a standardised KATCP external-facing 
> interface. But we write to the registers on the FPGAs ourselves -- i.e. only 
> this controller has direct, raw access to the FPGAs. This "master controller" 
> software process kills and starts separate software sub-processes as needed 
> in order to run the various instruments. Sometimes they operate 
> simultaneously by sub-dividing the available boards into separate resource 
> pools.
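
The pool bookkeeping such a controller does can be very simple; here is a toy
sketch with invented board and project names:

    # Toy sketch of subdividing a fixed set of boards into per-project pools,
    # loosely in the spirit of the master controller described above.  Board
    # and project names are invented for illustration.
    AVAILABLE = {"roach2-01", "roach2-02", "roach2-03", "roach2-04"}
    allocated = {}   # project name -> set of boards it currently holds

    def allocate(project, n_boards):
        """Reserve n_boards for a project, or fail if the pool is exhausted."""
        in_use = set().union(*allocated.values())
        free = sorted(AVAILABLE - in_use)
        if len(free) < n_boards:
            raise RuntimeError("not enough free boards for %s" % project)
        allocated[project] = set(free[:n_boards])
        return allocated[project]

    def release(project):
        """Return a project's boards to the free pool."""
        allocated.pop(project, None)

    allocate("beamformer", 2)     # two boards for one instrument
    allocate("spectrometer", 2)   # the remaining two for another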
> 
> We have one special case where two completely independent computers access 
> the hardware. We use this for internal testing and verification. But I 
> wouldn't use that for a deployed, science-observation system due to the risk of 
> collisions. You'd have to implement some sort of semaphore/lock/token system, 
> which would require co-ordination between the computers. To me, that seems 
> like a complicated effort for such a simple task.
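
For completeness, the simplest form of that semaphore would be an advisory lock
file on a filesystem that both control computers mount; a sketch follows (the
path is hypothetical, and it assumes the shared filesystem's lock semantics
actually work).  It is indeed extra machinery just to serialise access.

    # Sketch of the semaphore/lock idea: an advisory lock file on a filesystem
    # shared by both control computers, so only one commands the hardware at a
    # time.  The path is hypothetical, and this only helps if both hosts mount
    # the same filesystem with working lock semantics.
    import contextlib
    import fcntl

    LOCKFILE = "/shared/locks/roach2_control.lock"   # hypothetical shared path

    @contextlib.contextmanager
    def hardware_lock():
        with open(LOCKFILE, "w") as f:
            fcntl.flock(f, fcntl.LOCK_EX)    # block until we hold the lock
            try:
                yield
            finally:
                fcntl.flock(f, fcntl.LOCK_UN)

    with hardware_lock():
        pass   # program boards / write registers here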
> 
> Jason Manley
> Functional Manager: DSP
> SKA-SA
> 
> 
> On 28 Jan 2018, at 21:04, Tom Kuiper <kui...@jpl.nasa.gov> wrote:
> 
> > I'm interested in experience people have had using the same equipment 
> > installed on a telescope for different projects using different firmware 
> > and software.  Have there been issues with firmware swapping?  Are there 
> > software issues that cannot be managed by using a different control 
> > computer or a virtual environment in the same controller?  In addition to 
> > your experience I'd like a summary opinion: yes, it can be done without 
> > risking observations, or no, better to have separate hardware.
> >
> > Many thanks and best regards,
> >
> > Tom
> >
> 
