Thanks, Glenn.

I like the idea of copying from the file system. My machine is not NFS-mounted, but I
can scp.

I got this to work by splitting the data into smaller writes and using the offset=
argument. It is slow, but at least it seems to work. Thanks for the email.



    def big_write(self, data):
        # Write to DRAM in 64 kB pieces to avoid the katcp timeout
        # seen with large single writes.
        chunksize = 1024 * 64
        ll = len(data)
        nchunks = ll // chunksize
        for k in range(nchunks):
            st = k * chunksize
            ed = (k + 1) * chunksize
            self.roach.write('dram_memory', data[st:ed], offset=st)
        # Write any leftover bytes that do not fill a whole chunk.
        if ll % chunksize:
            st = nchunks * chunksize
            self.roach.write('dram_memory', data[st:], offset=st)
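Since I can scp, copying a file over and running dd like you describe below might
also work for me. A rough, untested sketch (it assumes root ssh access to the ROACH
and borrows self.roachip and self.bof_pid from your code):

    import subprocess

    def load_dram_scp(self, data, offset_bytes=0, tmpfile='/tmp/dram.bin'):
        # Hypothetical scp variant of the dd approach below: copy the data
        # to the ROACH over scp, then dd it into the dram_memory ioreg file.
        offset_blocks = offset_bytes / 512  # dd uses 512-byte blocks by default
        with open('dram.bin', 'wb') as f:
            f.write(data)
        subprocess.check_call('scp dram.bin root@%s:%s'
                              % (self.roachip, tmpfile), shell=True)
        dram_file = '/proc/%d/hw/ioreg/dram_memory' % self.bof_pid
        subprocess.check_call('ssh root@%s "dd seek=%d if=%s of=%s"'
                              % (self.roachip, offset_blocks, tmpfile, dram_file),
                              shell=True)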


________________________________
From: G Jones [[email protected]]
Sent: Tuesday, April 08, 2014 4:04 PM
To: Madden, Timothy J.
Cc: [email protected]
Subject: Re: [casper] Problem writing to DRAM, ROACH 1

Hi,

I think I ran into similar issues, but I don't remember it being a consistent 
failure for a given size, just that large transfers were somewhat unreliable.
I used code like this:
    def _load_dram_katcp(self, data, tries=2):
        # Retry the katcp write a few times; large transfers are unreliable.
        while tries > 0:
            try:
                self._pause_dram()
                self.r.write_dram(data.tostring())
                self._unpause_dram()
                return
            except Exception, e:
                print "failure writing to dram, trying again"
                # print e
            tries = tries - 1
        raise Exception("Writing to dram failed!")

to help deal with such problems. But then I found I got more speed by
generating the data as a file on the file system (since I was running the code
on the same machine that hosts the ROACH NFS file system, I could write the
data to e.g. /srv/roach_boot/etch/boffiles/dram.bin) and then using the Linux
command dd on the ROACH to write the data to DRAM. The code looks like:

    def _load_dram_ssh(self, data, offset_bytes=0, roach_root='/srv/roach_boot/etch',
                       datafile='boffiles/dram.bin'):
        offset_blocks = offset_bytes / 512  # dd uses blocks of 512 bytes by default
        self._update_bof_pid()
        self._pause_dram()
        data.tofile(os.path.join(roach_root, datafile))
        dram_file = '/proc/%d/hw/ioreg/dram_memory' % self.bof_pid
        datafile = '/' + datafile
        result = borph_utils.check_output(('ssh root@%s "dd seek=%d if=%s of=%s"'
                                           % (self.roachip, offset_blocks, datafile,
                                              dram_file)), shell=True)
        print result
        self._unpause_dram()

This seems to work pretty well.
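For what it's worth, data here is a numpy array (tofile() and tostring() above are
numpy methods), so a call looks roughly like this (fpga is a hypothetical instance
of the class these methods live on):

    import numpy as np

    lut = np.zeros(2 * 1024 * 1024, dtype=np.uint8)  # 2 MB of zeros
    fpga._load_dram_ssh(lut, offset_bytes=0)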

Glenn


On Tue, Apr 8, 2014 at 4:51 PM, Madden, Timothy J.
<[email protected]> wrote:


I am using a DRAM block on a ROACH 1.

I am using Python to write data to the DRAM with corr.

I create a binary array of zeros like:

fa.lut_binaryIQ = '\x00\x00\x00\x00\x00\x00.....'

Length of the array is 1048576.

If I do
roach.write('dram_memory',fa.lut_binaryIQ)

It works fine.

If I double the length of the binary array, so that len(fa.lut_binaryIQ) = 2097152,

and then do
roach.write('dram_memory',fa.lut_binaryIQ)

I get a timeout error:
RuntimeError: Request write timed out after 20 seconds.


I have tried longer timeouts, up to 60 seconds, with no better result. I set
the timeout with:
roach = corr.katcp_wrapper.FpgaClient('192.168.0.67', 7147,timeout=60)


Any ideas? It seems there is a 1 MB length limit on my DRAM.

Tim



