Hi Tom.

I think this is reasonably easy to manage.  At Green Bank, the spectrometer
consists of 8 ROACH-2s that are all reprogrammed for different observing
modes.  The modes are personalities stored on disk and loaded on command.
It works fine.  You do have to coordinate to make sure only one computer is
commanding things.  If you're not hitting the wall performance-wise,
rebooting your control computer into different virtual machines is an
interesting way to make sure you don't get wrapped around that particular
axle.  We never attempted to run stuff on a virtual machine because we were
trying to wring all the performance we could out of our 10 gig ports.  It
would be an interesting experiment to see how much running in a VM costs in
performance...
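
If two machines really can reach the boards, the simplest guard I know of
is an advisory lock that every control process has to take before it
touches the hardware.  A rough sketch in Python (the lockfile path is made
up, and it assumes both control computers mount the same disk):

    import fcntl
    import sys

    LOCKFILE = "/var/lock/spectrometer.lock"  # hypothetical; must live on shared storage

    def acquire_command_lock():
        """Take an exclusive advisory lock; bail out if another controller holds it."""
        fh = open(LOCKFILE, "w")
        try:
            fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except BlockingIOError:
            sys.exit("another control process is already commanding the boards")
        return fh  # keep the handle open for the life of the process

    lock_handle = acquire_command_lock()
    # ... safe to program and command the FPGAs from here ...

Locking over NFS is its own can of worms, so treat that as a starting
point rather than a guarantee.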

Managing the FPGA personalities is easy, I think.  Managing the software is
probably pretty easy as well if you package things up and have scripts to
start and stop the different experiments.
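
To make that concrete, here's roughly what the packaging could look like,
assuming the casperfpga Python package and .fpg personalities on disk (the
mode table, paths, and daemon names below are invented for illustration):

    import subprocess

    import casperfpga  # CASPER's FPGA control library

    # Hypothetical mode table: one personality per mode, plus the software
    # that should run alongside it.
    MODES = {
        "spectrometer": {"fpg": "/opt/personalities/spec_8bit.fpg",
                         "start": ["/opt/bin/spec_daemon", "--start"],
                         "stop": ["/opt/bin/spec_daemon", "--stop"]},
        "pulsar": {"fpg": "/opt/personalities/pulsar_vdif.fpg",
                   "start": ["/opt/bin/pulsar_daemon", "--start"],
                   "stop": ["/opt/bin/pulsar_daemon", "--stop"]},
    }

    def switch_mode(hosts, old, new):
        """Stop the old experiment, reprogram every board, start the new one."""
        subprocess.check_call(MODES[old]["stop"])
        for host in hosts:
            fpga = casperfpga.CasperFpga(host)
            fpga.upload_to_ram_and_program(MODES[new]["fpg"])
        subprocess.check_call(MODES[new]["start"])

    switch_mode(["roach2-%d" % i for i in range(8)], "pulsar", "spectrometer")

Wrap something like that in per-experiment start/stop scripts and the
scheduling problem reduces to deciding who gets to call it.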

John


On Mon, Jan 29, 2018 at 12:17 AM, Jason Manley <jman...@ska.ac.za> wrote:

> Hi Tom
>
> We switch firmware around on our boards regularly (~20min observation
> windows) on KAT-7 and MeerKAT. But we maintain control of the various
> bitstreams ourselves, and manage the boards internally.
>
> There is a master controller which handles allocation of processing nodes
> to various projects, and loads the appropriate firmware onto those boards
> for their use. The master controller has a standardised KATCP
> external-facing interface. But we write to the registers on the FPGAs
> ourselves -- i.e. only this controller has direct, raw access to the FPGAs.
> This "master controller" software process kills and starts separate
> software sub-processes as needed in order to run the various instruments.
> Sometimes they operate simultaneously by sub-dividing the available boards
> into separate resource pools.
>
> We have one special case where two completely independent computers access
> the hardware. We use this for internal testing and verification. But I
> wouldn't use that for a deployed, science-observation system due to risk of
> collisions. You'd have to implement some sort of semaphore/lock/token
> system, which would require co-ordination between the computers. To me,
> that seems like a complicated effort for such a simple task.
>
> Jason Manley
> Functional Manager: DSP
> SKA-SA
>
>
> On 28 Jan 2018, at 21:04, Tom Kuiper <kui...@jpl.nasa.gov> wrote:
>
> > I'm interested in experience people have had using the same equipment
> > installed on a telescope for different projects using different firmware
> > and software.  Have there been issues with firmware swapping?  Are there
> > software issues that cannot be managed by using a different control
> > computer or a virtual environment in the same controller?  In addition to
> > your experience I'd like a summary opinion: yes, it can be done without
> > risking observations, or no, better to have separate hardware.
> >
> > Many thanks and best regards,
> >
> > Tom
> >
