Thanks, everyone, for the input. I've been trying to avoid the 10 GbE option for a couple of reasons, but I might have to bite the bullet.
I'm also going to look at another option: with the mmap kernel interface I have access to the FPGA memory through /dev/roach/mem. It should be fairly easy to write a custom server that runs on the ROACH, collects the BRAM data from that device, and buffers it for sending over Ethernet via UDP. This is probably very similar to what the UDP option in tcpborphserver does, but I'll be able to control the amount of buffering. I guess you could call it a replacement for tcpborphserver that is specific to my application.

Cheers,

Ross

On Wed, Oct 16, 2013 at 6:01 AM, Marc Welz <[email protected]> wrote:
> Hello
>
> The path FPGA <-> PPC <-> ethernet <-> python hasn't been
> optimised for speed. So if you can get hold of a
> PCI(e) CX4 ethernet card for your capture machine,
> then you will save yourself a considerable amount
> of effort and get much better maximum transfer rates.
>
> However: if this isn't an option, there are a
> couple of things you can attempt. The warning
> is that you will have to be(come) reasonably
> handy with C. As Jason mentioned, you could
> try larger read sizes, and maybe have multiple
> concurrent reads from different connections in an
> effort to amortise the round-trip time.
>
> If that doesn't work: tcpborphserver(2,3)
> has an extension mechanism... you can run
> callbacks which do things, and one option
> is to run something which sends out your
> data in UDP packets. That saves on encoding
> time (no katcp escaping) and also doesn't
> halve throughput on packet loss. It turns
> out that tcpborphserver2 has an optional
> mode which can be built that is specific
> to a pocket correlator; this provides an
> example of how to implement UDP data dumping.
> The downside is that the
> interface is not documented, and there are
> some differences between v2 and v3.
>
> Another option is to look at the Linux
> kernel driver for the network interface;
> there was some weirdness with the EMAC
> and also the PHY, so there may be
> the option of tweaking and fixing things
> to get better throughput.
>
> regards
>
> marc

--
Ross Williamson
Research Scientist - Sub-mm Group
California Institute of Technology
626-395-2647 (office)
312-504-3051 (Cell)
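In case it helps anyone else looking at the same approach: the "custom server" idea above boils down to two pieces, mmap'ing the memory device and chunking it into UDP datagrams. Here is a minimal sketch in C. The device path /dev/roach/mem, the region offset, and the packet size are assumptions for illustration; substitute the offsets from your own design.

```c
#include <fcntl.h>
#include <netinet/in.h>
#include <stdint.h>
#include <sys/mman.h>
#include <sys/socket.h>
#include <unistd.h>

/* mmap `len` bytes of a memory-like device (e.g. /dev/roach/mem on the
 * ROACH), starting at page-aligned `offset`, read-only.  On success the
 * open fd is returned through *fd_out; returns MAP_FAILED on error. */
static uint8_t *map_region(const char *path, off_t offset, size_t len,
                           int *fd_out)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return MAP_FAILED;
    uint8_t *p = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, offset);
    if (p == MAP_FAILED)
        close(fd);
    else
        *fd_out = fd;
    return p;
}

/* Send `len` bytes from `buf` to `dst` as UDP datagrams of at most
 * `pkt` bytes each.  Returns the number of datagrams sent, or -1 on a
 * send error. */
static long dump_region_udp(int sock, const struct sockaddr_in *dst,
                            const uint8_t *buf, size_t len, size_t pkt)
{
    long sent = 0;
    for (size_t off = 0; off < len; off += pkt) {
        size_t n = (len - off < pkt) ? len - off : pkt;
        if (sendto(sock, buf + off, n, 0,
                   (const struct sockaddr *)dst, sizeof(*dst)) < 0)
            return -1;
        sent++;
    }
    return sent;
}
```

A real server would sit in a loop: wait for the FPGA to signal a fresh snapshot (e.g. by polling a status register in the same mapped window), copy the BRAM out into a ring buffer, and let a second thread drain that buffer over UDP; that copy-then-drain split is where you get control over the amount of buffering that tcpborphserver doesn't give you.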

