As per my previous email: either start it again, or upgrade it.

regards
marc

On Fri, Aug 10, 2018 at 12:45 PM, Concu, Raimondo <[email protected]> wrote:
> Hi Marc,
>
> Hi everyone,
>
> Maybe you're right: when the problem happens and I restart tcpborphserver3,
> the problem disappears. This is the output:
>
> root@192:~# ps -ALL | grep tcp
>   820   820 ?        00:00:53 tcpborphserver3
> root@192:~# kill 820
> root@192:~#
> roach VMA close
> roach release mem called
> [the "roach VMA close" / "roach release mem called" pair repeats roughly 60 more times]
>
> Any suggestions?
>
> Thanks in advance
> Raimondo
>
>
> 2018-08-10 13:46 GMT+02:00 Marc Welz <[email protected]>:
>>
>> Hello
>>
>> The "raw unable_to_map_file_/dev/roach/mem:_Cannot_allocate_memory"
>> is the interesting line. I suspect you are running a slightly older
>> version of the filesystem or romfs which had a bug where it didn't
>> unmap the previous image, and so suffered address-space exhaustion
>> after a dozen or so programming cycles.
>> The quick way to work around this is to reboot
>> occasionally; the long-term solution is to program a newer version of
>> the romfs (in particular tcpborphserver).
>>
>> regards
>>
>> marc
>>
>>
>> On Fri, Aug 10, 2018 at 10:17 AM, Concu, Raimondo
>> <[email protected]> wrote:
>> > Hi Everyone,
>> >
>> > I have a problem when programming the board after a certain number of
>> > times (30 - 60), as you can see in the following log:
>> >
>> > DEBUG:katcp:Starting thread Thread-1
>> > 2018-08-10 11:04:15,203--client--DEBUG: Starting thread Thread-1
>> > DEBUG:katcp:#version alpha-6-g0b8dd54
>> > 2018-08-10 11:04:15,206--client--DEBUG: #version alpha-6-g0b8dd54
>> > DEBUG:katcp:#build-state 2012-10-24T10:04:56
>> > 2018-08-10 11:04:15,206--client--DEBUG: #build-state 2012-10-24T10:04:56
>> > DEBUG:katcp:#version-connect katcp-library alpha-6-g0b8dd54 2012-10-24T10:04:56
>> > 2018-08-10 11:04:15,206--client--DEBUG: #version-connect katcp-library alpha-6-g0b8dd54 2012-10-24T10:04:56
>> > DEBUG:katcp:#version-connect katcp-protocol 4.9-M
>> > 2018-08-10 11:04:15,207--client--DEBUG: #version-connect katcp-protocol 4.9-M
>> > DEBUG:katcp:#version-connect kernel 3.4.0-rc3+ #14_Tue_May_29_17:05:02_SAST_2012
>> > 2018-08-10 11:04:15,207--client--DEBUG: #version-connect kernel 3.4.0-rc3+ #14_Tue_May_29_17:05:02_SAST_2012
>> > DEBUG:katcp:?progdev 1024.bof
>> > 2018-08-10 11:04:16,206--client--DEBUG: ?progdev 1024.bof
>> > DEBUG:katcp:#log info 1045321281687 raw attempting_to_program_1024.bof
>> > 2018-08-10 11:04:16,646--client--DEBUG: #log info 1045321281687 raw attempting_to_program_1024.bof
>> > DEBUG:katcp:#log info 1045321281695 raw attempting_to_program_bitstream_of_19586188_bytes_to_device_/dev/roach/config
>> > 2018-08-10 11:04:16,646--client--DEBUG: #log info 1045321281695 raw attempting_to_program_bitstream_of_19586188_bytes_to_device_/dev/roach/config
>> > DEBUG:katcp:#fpga loaded
>> > 2018-08-10 11:04:16,647--client--DEBUG: #fpga loaded
>> > DEBUG:katcp:#log error 1045321282122 raw unable_to_map_file_/dev/roach/mem:_Cannot_allocate_memory
>> > 2018-08-10 11:04:16,647--client--DEBUG: #log error 1045321282122 raw unable_to_map_file_/dev/roach/mem:_Cannot_allocate_memory
>> > DEBUG:katcp:#log error 1045321282125 raw unable_to_program_bit_stream_from_1024.bof
>> > 2018-08-10 11:04:16,647--client--DEBUG: #log error 1045321282125 raw unable_to_program_bit_stream_from_1024.bof
>> > DEBUG:katcp:!progdev fail
>> > 2018-08-10 11:04:16,648--client--DEBUG: !progdev fail
>> > ERROR:katcp:Request progdev failed.
>> >   Request: ?progdev 1024.bof
>> >   Reply: !progdev fail.
>> > 2018-08-10 11:04:16,648--katcp_wrapper--ERROR: Request progdev failed.
>> >   Request: ?progdev 1024.bof
>> >   Reply: !progdev fail.
>> > Traceback (most recent call last):
>> >   File "sardara_1024.py", line 50, in <module>
>> >     fpga.progdev(boffile)
>> >   File "/usr/local/lib/python2.7/dist-packages/corr/katcp_wrapper.py", line 243, in progdev
>> >     reply, informs = self._request("progdev", self._timeout, parser.got_reply(boffile))
>> >   File "/usr/local/lib/python2.7/dist-packages/corr/katcp_wrapper.py", line 198, in _request
>> >     % (request.name, request, reply))
>> > RuntimeError: Request progdev failed.
>> >   Request: ?progdev 1024.bof
>> >   Reply: !progdev fail.
>> >
>> > What is the problem?
>> >
>> > Thanks to everyone.
>> >
>> > Raimondo

--
You received this message because you are subscribed to the Google Groups "[email protected]" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To post to this group, send email to [email protected].
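[Editor's note] Marc's diagnosis above is that the older tcpborphserver never unmapped the previous FPGA image, so every ?progdev cycle left a stale mapping behind and the server's address space eventually ran out, making the next mmap of /dev/roach/mem fail with "Cannot allocate memory". The sketch below is not tcpborphserver's actual C code — it just illustrates the map/unmap pattern in Python's mmap module, with an anonymous mapping standing in for /dev/roach/mem:

```python
import mmap

def program_image(state, size):
    """Map a new 'image' into the process, releasing the previous one.

    The buggy pattern omits the close() below: each programming cycle
    then leaks a mapping, and on a 32-bit system (like the ROACH's
    PowerPC) mmap eventually fails with ENOMEM after enough cycles.
    """
    previous = state.get("mapping")
    if previous is not None:
        previous.close()  # munmap the old image -- the step the old romfs missed
    state["mapping"] = mmap.mmap(-1, size)  # anonymous stand-in for /dev/roach/mem
    return state["mapping"]
```

Restarting tcpborphserver3 "fixes" the problem for the same reason: killing the process tears down all its mappings at once, giving the next run a fresh address space.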
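[Editor's note] On the client side, until the romfs is upgraded, scripts like sardara_1024.py can at least fail more gracefully. A minimal sketch, assuming only what the traceback above shows (fpga.progdev(boffile) raises RuntimeError on a "!progdev fail" reply); the helper name and the on_fail hook are illustrative, not part of the corr library:

```python
def progdev_with_retry(fpga, boffile, attempts=3, on_fail=None):
    """Try fpga.progdev(boffile) up to `attempts` times.

    corr's katcp_wrapper raises RuntimeError when the board replies
    '!progdev fail'; `on_fail`, if given, runs between attempts and is
    the place to restart tcpborphserver3 or power-cycle the board.
    """
    for attempt in range(attempts):
        try:
            return fpga.progdev(boffile)
        except RuntimeError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the original failure
            if on_fail is not None:
                on_fail()
```

Note this only papers over the leak: once the server's address space is exhausted, only restarting tcpborphserver3 (or rebooting the board) actually clears it, which is what on_fail should arrange.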

