Re: [lwip-users] LwIP RAW + Zynq - Unresponsive Tx path when Rx is active

2018-08-14 Thread Nenad Pekez

You said you use RAW API,
and I remember answering some question of yours, but I don't remember if
you are running bare metal or an OS (neither should I).


Yes, I am running bare metal.


You should (read: must) be calling sys_check_timeouts() frequently
enough for TCP to handle its internal timers;
Another source of errors is people (and vendors) not reading the docs
and calling lwIP low-level functions from different contexts (like main
loop and interrupts on a bare metal system).


I discussed this with Simon in detail in a previous thread:
http://lists.nongnu.org/archive/html/lwip-users/2018-07/msg5.html


I do check TCP timeouts all the time and I am sure nothing is called 
from interrupt context.



I guess you can do with the built-in
handler, that is, no tcp_sent() callback. Do you have one ?


I have added a tcp_sent callback. The behavior as seen in Wireshark is
somewhat different, but the Tx path still hangs. I have attached some
Wireshark captures at this link: https://files.fm/u/2mhggcav.



So, instead of your application,
use a known to work good application (read: written by someone with some
expertise on the subject, that is: the example apps in the tree or in
the contrib tree).


Well, the problem is that I cannot find an appropriate third-party
application which does sending and receiving at the same time. I have
checked the iperf application provided in the lwIP sources, but there the
sending is done only after the receiving has finished. Nevertheless, I did
use this application as a reference for implementing my own. I was also
looking at some Xilinx iperf examples.



Since you are debugging a heavy sender


Is this really a heavy sender? It's just 1.5 MB per second on 1 Gb
Ethernet.



the web server might be a better
choice.


I cannot find this application in the lwIP sources. Maybe you can give me
some references?



You can also use a bare minimum app that "just sends"


When I do "just sending" or "just receiving", the problem does not exist.
I could send you my code showing how the sending is done, but basically I
just write to the TCP send buffer from time to time in the main loop and
have a counter in the tcp_sent callback counting how much data has been
acknowledged. Nothing more than that. Maybe I should continue sending
from the tcp_sent callback?
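
Schematically, it looks like this (a simplified sketch with placeholder
buffer handling, not my actual code; the while loop shows the alternative
of refilling directly from the callback):

    static u32_t acked_bytes = 0;
    static const char payload[TCP_MSS];     /* dummy data */

    err_t txperf_sent_callback(void *arg, struct tcp_pcb *tpcb, u16_t len)
    {
        acked_bytes += len;                 /* bytes newly ACKed by the peer */

        /* alternative: keep the pipe full directly from the callback */
        while (tcp_sndbuf(tpcb) > TCP_MSS) {
            if (tcp_write(tpcb, payload, TCP_MSS, TCP_WRITE_FLAG_COPY) != ERR_OK) {
                break;                      /* out of queue space; retry on next ACK */
            }
        }
        return ERR_OK;
    }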



And sometimes caches... what is it that you have there ?


Wow, caches are a story in themselves. I still need to check some things
with the caches. I will report back on this one.


Sergio, thank you very much for your answers and ideas.

Best regards,
Nenad

[lwip-users] LwIP RAW + Zynq - Unresponsive Tx path when Rx is active

2018-08-10 Thread Nenad Pekez

Hello,

I hope to get some advice on how to debug a problem. I am still
investigating it, and my knowledge of Ethernet drivers and lwIP is still
limited.


We have two lwIP (2.0.2) RAW servers on a Zynq-based board. Each server
can serve only one client at a time. On port 4040 the server only receives,
and on port 4041 it only sends data. Everything works fine as long as we
don't receive and send simultaneously; otherwise, communication over 4041
(the Tx port) stalls at some point. We send and receive at data rates of
around 1 MB/s, but over a 1 Gb network.


Looking at Wireshark (121 is the PC, 124 is the Zynq), we see a lot of Dup
ACKs before communication stalls. But once it stalls, no traffic is captured
at all. For a whole minute nothing is captured, until we close our PC
application.


Looking at the lwIP memory stats on the Zynq, nothing unusual is observed.
What is strange, though, is that when communication stalls, the LINK STAT
xmit value keeps increasing, while the TCP STAT xmit value is not updated
at all. As a result, the TCP send buffers are never emptied and we cannot
write anything anymore. We call tcp_output() when tcp_sndbuf(pcb) <
TCP_MSS/2, where TCP_MSS is 1460. We don't use the tcp_sent callback at
all (is that a problem?); we just periodically write to the TCP send
buffer. The other side of the connection is alive the whole time without
problems.
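
For concreteness, a simplified sketch of what our Tx path does
(placeholder names, not the actual code):

    /* called periodically from the main loop; no tcp_sent callback is registered */
    void transfer_txperf_data(void)
    {
        extern struct tcp_pcb *connected_pcb;   /* set in the accept callback */
        static const char payload[TCP_MSS];     /* dummy data */
        struct tcp_pcb *tpcb = connected_pcb;

        if (tpcb == NULL) {
            return;
        }
        if (tcp_sndbuf(tpcb) >= TCP_MSS) {      /* room for one more segment */
            tcp_write(tpcb, payload, TCP_MSS, TCP_WRITE_FLAG_COPY);
        }
        if (tcp_sndbuf(tpcb) < TCP_MSS / 2) {   /* send buffer nearly full: flush */
            tcp_output(tpcb);
        }
    }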


Any idea what this situation with the LINK and TCP xmit stats means? Any
ideas on how to debug this problem further?


Thank you very much,
Nenad


Re: [lwip-users] LwIP RAW - Simultaneous (full-duplex) communication

2018-07-06 Thread Nenad Pekez
When I added copying of the pbuf data to an application buffer inside the
TCP receive callback, one interesting thing happened: the Tx side became
completely unresponsive!


I have been copying it like this:
if (p->tot_len < BUF_SIZE) {
    if (pbuf_copy_partial(p, (void *)buf, p->tot_len, 0) != p->tot_len) {
        xil_printf("total copied != p->tot_len\n");
    }
}

I assume that using pbuf_copy_partial() is a valid way to copy data to an
application buffer, isn't it? This looks like some kind of memory corruption.
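
For context, the whole callback looks roughly like this (a sketch; buf and
BUF_SIZE stand for my application buffer and its size):

    #define BUF_SIZE 8192                 /* illustrative size */
    static char buf[BUF_SIZE];            /* application buffer */

    static err_t recv_callback(void *arg, struct tcp_pcb *tpcb, struct pbuf *p, err_t err)
    {
        if (p == NULL) {                  /* remote side closed the connection */
            tcp_close(tpcb);
            return ERR_OK;
        }
        if (p->tot_len < BUF_SIZE) {
            if (pbuf_copy_partial(p, (void *)buf, p->tot_len, 0) != p->tot_len) {
                xil_printf("total copied != p->tot_len\n");
            }
        }
        tcp_recved(tpcb, p->tot_len);     /* re-open the receive window */
        pbuf_free(p);                     /* the callback owns p and must free it */
        return ERR_OK;
    }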





Re: [lwip-users] LwIP RAW - Simultaneous (full-duplex) communication

2018-07-06 Thread Nenad Pekez

You'll have to try and debug it yourself.


For now, I have started counting MAC send and receive handler invocations,
and it is obvious that when both connections are active the send handler is
completely dominant.


At one point there is:
tx_handler: 3759422
rx_handler: 864940

I will check if there are any priorities for these handlers.


E.g. set a breakpoint at when the stats underflow happens


I'm not really sure how this could be done, nor where the breakpoint could
be set (especially because I cannot step into the library sources in
Xilinx SDK).







Re: [lwip-users] LwIP RAW - Simultaneous (full-duplex) communication

2018-07-05 Thread Nenad Pekez

And from which context are timers processed?


Xilinx timers are used. A timer callback is invoked every 250 ms, inside
which the TcpFastTmrFlag (for the 250 ms period) and/or TcpSlowTmrFlag (for
the 500 ms period) variables are set to 1. In the main loop, there is this:


    while (1) {
        if (TcpFastTmrFlag) {
            tcp_fasttmr();
            TcpFastTmrFlag = 0;
        }
        if (TcpSlowTmrFlag) {
            tcp_slowtmr();
            TcpSlowTmrFlag = 0;
        }
        xemacif_input(echo_netif);
        transfer_txperf_data();
    }
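
For completeness, the timer side looks roughly like this (a sketch; the
odd/even toggle is my assumption about how the 500 ms period is derived
from the 250 ms callback):

    volatile int TcpFastTmrFlag = 0;
    volatile int TcpSlowTmrFlag = 0;
    volatile int ResetRxCntr = 0;     /* used by the Rx-reset workaround below */

    void timer_callback(void)         /* fires every 250 ms */
    {
        static int odd = 1;

        TcpFastTmrFlag = 1;           /* tcp_fasttmr() due every 250 ms */
        odd = !odd;
        if (!odd) {
            TcpSlowTmrFlag = 1;       /* tcp_slowtmr() due every 500 ms */
        }
        ResetRxCntr++;
    }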

Inside the timer callback, there is also an SW workaround for a HW bug of
the Ethernet controller:

    /* For providing an SW alternative for the SI #692601. Under heavy
     * Rx traffic, if at some point the Rx path becomes unresponsive, the
     * following API call ensures a SW reset of the Rx path. The
     * API xemacpsif_resetrx_on_no_rxdata is called every 100 milliseconds.
     * This ensures that if the above HW bug is hit, in the worst case,
     * the Rx path cannot become unresponsive for more than 100
     * milliseconds.
     */
#ifndef USE_SOFTETH_ON_ZYNQ
    if (ResetRxCntr >= RESET_RX_CNTR_LIMIT) {
        xemacpsif_resetrx_on_no_rxdata(echo_netif);
        ResetRxCntr = 0;
    }
#endif

I don't think any calls to lwIP are made from an ISR.




Re: [lwip-users] LwIP RAW - Simultaneous (full-duplex) communication

2018-07-05 Thread Nenad Pekez

Have you read this:

http://www.nongnu.org/lwip/2_0_x/pitfalls.html

I took a look at this. This is already handled by the Xilinx drivers. In
their MAC receive handler they allocate memory for a pbuf (a call to
pbuf_realloc(), actually) and enqueue it. In the main loop, I call the
Xilinx function xemacif_input(), which checks whether there is anything in
the queue and, if there is, calls netif->input() as described in the example.
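
Schematically, that hand-off looks like this (simplified; dequeue_rx_pbuf()
is a hypothetical stand-in for the Xilinx queue accessor):

    /* main-loop side: drain the pbuf queue that the MAC receive handler fills */
    extern struct pbuf *dequeue_rx_pbuf(void);   /* hypothetical queue accessor */

    void xemacif_input_sketch(struct netif *netif)
    {
        struct pbuf *p;

        while ((p = dequeue_rx_pbuf()) != NULL) {
            if (netif->input(p, netif) != ERR_OK) {
                pbuf_free(p);                    /* the stack refused the frame */
            }
        }
    }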


What I also noticed: when the Zynq->PC side becomes active, in the TCP recv
callback I get p->tot_len much larger than 1446, and from time to time
there is also a "Receive over run" print from the MAC error handler.


The more I look at this, the more it seems like another problem in the Xilinx drivers.



Re: [lwip-users] LwIP RAW - Simultaneous (full-duplex) communication

2018-07-05 Thread Nenad Pekez

Hi Simon,

But it seems irrelevant? 

Yes, it's irrelevant.

Have you checked the stats if you run out of memory somewhere? 
I have checked them, and there is certainly something wrong. After some
time, the memory stats seem to underflow (wrap around); at least that's my
impression.


At one point I have this:
MEM HEAP
    avail: 6
    used: 13C40
    max: 14340
    err: 0

And the next print (one second later):
MEM HEAP
    avail: 6
    used: FFFC6480
    max: FFFC6A80
    err: 5

Sometimes there is no error at all; "used" and "max" just seem to wrap
around.


Also, I noticed that no matter which connection is made first, the Zynq->PC
side is the dominant one. I even tried not calling tcp_output() at all,
but the behavior is the same.


When only the PC->Zynq side is active, the memory stats don't change at all
(used: 240, max: 440), even though high throughput is achieved (>900 Mb/s).
Only when the Zynq->PC side is active does memory consumption increase
considerably.


Actually, I'm too lazy to check out myself what the images contain. 
But not too lazy to write that many sentences. :) A picture is worth a
thousand words. I understand your point, though. :)


Best regards and thanks for helping me out,
Nenad




Re: [lwip-users] LwIP RAW - Simultaneous (full-duplex) communication

2018-07-04 Thread Nenad Pekez

Hi,

quick update: the iperf client on the PC was trying to connect on the same
port, 5001, hence the mentioned error was reported.


Now there is full-duplex communication; however, whichever connection is
established first is completely dominant. Once the first connection is
closed, the other one achieves higher throughput. See the provided images.


Best regards,
Nenad



[lwip-users] LwIP RAW - Simultaneous (full-duplex) communication

2018-07-04 Thread Nenad Pekez

Hi guys,

if there is an lwIP TCP server that has two connected clients (therefore
two PCBs, let's say PCB A and PCB B), and tcp_write() + tcp_output() are
performed on PCB A while at the same time the server is processing received
packets on PCB B, will any conflict happen? Is this a valid sequence of
events?


What I have tried so far is this:

[iperf server:5001 @ PC Windows 7] <-- [iperf client @ Zynq+lwIP 2.0.2]

[iperf client @ PC Windows 7] --> [iperf server:5002 @ Zynq+lwIP 2.0.2]


In other words, there is an iperf client on the Zynq board and it connects
to an iperf server running on the PC; data is sent from the Zynq to the PC
and everything works fine. As soon as the PC iperf client connects to the
iperf server on the Zynq (while the other connection is still alive), both
connections crash and error -11 (ERR_CONN, "not connected") is returned
from tcp_write().


All in all, I guess this kind of communication is possible, but is there
something extra that needs to be handled? Are there any examples or
references on how full-duplex communication can be achieved with lwIP RAW?
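
Schematically, what I am asking about is this (placeholder names):

    struct tcp_pcb *pcb_a;    /* we send on this connection */
    struct tcp_pcb *pcb_b;    /* the peer sends to us on this one */

    void main_loop_iteration(struct netif *netif)
    {
        static const char data[TCP_MSS];    /* dummy payload */

        /* Rx: may invoke the recv callback registered on pcb_b */
        xemacif_input(netif);

        /* Tx: queue and push data on pcb_a from the same context */
        if (pcb_a != NULL && tcp_sndbuf(pcb_a) >= TCP_MSS) {
            tcp_write(pcb_a, data, TCP_MSS, TCP_WRITE_FLAG_COPY);
            tcp_output(pcb_a);
        }
    }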


Best regards,
Nenad

#ifndef __LWIPOPTS_H_
#define __LWIPOPTS_H_

#ifndef PROCESSOR_LITTLE_ENDIAN
#define PROCESSOR_LITTLE_ENDIAN
#endif

#define SYS_LIGHTWEIGHT_PROT 1

#define NO_SYS 1
#define LWIP_SOCKET 0
#define LWIP_COMPAT_SOCKETS 0
#define LWIP_NETCONN 0

#define NO_SYS_NO_TIMERS 1

#define LWIP_TCP_KEEPALIVE 0

#define MEM_ALIGNMENT 64
#define MEM_SIZE 393216
#define MEMP_NUM_PBUF 32
#define MEMP_NUM_UDP_PCB 4
#define MEMP_NUM_TCP_PCB 32
#define MEMP_NUM_TCP_PCB_LISTEN 8
#define MEMP_NUM_TCP_SEG 2048
#define MEMP_NUM_SYS_TIMEOUT 8
#define MEMP_NUM_NETBUF 8
#define MEMP_NUM_NETCONN 16
#define MEMP_NUM_TCPIP_MSG_API 16
#define MEMP_NUM_TCPIP_MSG_INPKT 64

#define MEMP_NUM_SYS_TIMEOUT 8
#define PBUF_POOL_SIZE 256
#define PBUF_POOL_BUFSIZE 1700
#define PBUF_LINK_HLEN 16

#define ARP_TABLE_SIZE 10
#define ARP_QUEUEING 1

#define ICMP_TTL 255

#define IP_OPTIONS 0
#define IP_FORWARD 0
#define IP_REASSEMBLY 1
#define IP_FRAG 1
#define IP_REASS_MAX_PBUFS 128
#define IP_FRAG_MAX_MTU 1500
#define IP_DEFAULT_TTL 255
#define LWIP_CHKSUM_ALGORITHM 3

#define LWIP_UDP 1
#define UDP_TTL 255

#define LWIP_TCP 1
#define TCP_MSS 1460
#define TCP_SND_BUF 64240
#define TCP_WND 64240
#define TCP_TTL 255
#define TCP_MAXRTX 12
#define TCP_SYNMAXRTX 4
#define TCP_QUEUE_OOSEQ 1
#define TCP_SND_QUEUELEN   ((16 * TCP_SND_BUF) / TCP_MSS)
#define CHECKSUM_GEN_TCP    0
#define CHECKSUM_GEN_UDP    0
#define CHECKSUM_GEN_IP     0
#define CHECKSUM_CHECK_TCP  0
#define CHECKSUM_CHECK_UDP  0
#define CHECKSUM_CHECK_IP   0
#define LWIP_FULL_CSUM_OFFLOAD_RX  1
#define LWIP_FULL_CSUM_OFFLOAD_TX  1

#define MEMP_SEPARATE_POOLS 1
#define MEMP_NUM_FRAG_PBUF 256
#define IP_OPTIONS_ALLOWED 0
#define TCP_OVERSIZE TCP_MSS

#define LWIP_DHCP 1
#define DHCP_DOES_ARP_CHECK 1

#define CONFIG_LINKSPEED_AUTODETECT 1

#define LWIP_STATS 1
#define LWIP_STATS_DISPLAY 1

#define DHCP_DEBUG (LWIP_DBG_LEVEL_SEVERE | LWIP_DBG_ON)
#endif

Re: [lwip-users] lwIP RAW - Low throughput during first few seconds

2018-06-29 Thread Nenad Pekez

This is a Xilinx driver, right? Have you read this (also Xilinx):

http://lists.nongnu.org/archive/html/lwip-devel/2017-12/msg00070.html


It seems they have fixed the problem mentioned in that thread in the newest
version of the lwIP port, specifically in the file xemacpsif_dma.c.
However, I was using an older version of this file, and it seems the
problem was there all along.


Everything (throughput and connections) seems OK now that I have switched
to the latest version of this file.


Thank you guys,
Nenad





Re: [lwip-users] lwIP RAW - Low throughput during first few seconds

2018-06-29 Thread Nenad Pekez

Hi Sergio,


if you are using NO_SYS=1

yes, I am using NO_SYS=1.


there must be a main loop calling sys_check_timeouts()

my main loop (provided by Xilinx XAPP1026):

    /* receive and process packets */
    while (1) {
        if (TcpFastTmrFlag) {
            tcp_fasttmr();
            TcpFastTmrFlag = 0;
        }
        if (TcpSlowTmrFlag) {
            tcp_slowtmr();
            TcpSlowTmrFlag = 0;
        }
        xemacif_input(netif);
    }

sys_check_timeouts() cannot be used because the lwIP timers (LWIP_TIMERS)
are not enabled; Zynq timers are used instead.



not polling the receiver frequently enough

I am not using the poll callback, if that has anything to do with this.


You should check if your frames carrying your TCP SYNs are being
delivered or get lost in the driver.
If you have time, please take a look at the description of the
connection-timeout problem I posted on the Xilinx forum:
https://forums.xilinx.com/t5/Embedded-Processor-System-Design/Zynq-7000-LwIP-2-0-2-Connection-timeout/td-p/869319

There I have described what happens with the TCP SYN packets (I looked at
the lwIP TCP and IP debug prints). All in all, the Zynq gets the SYN
packet, which is forwarded to the IP and TCP layers. The response is
carried out, i.e. "ip4_output_if: call netif->output()" is printed, yet no
frame is actually sent back to the PC (I verified this with Wireshark).
This looks like a driver issue, doesn't it?


Thank you for your response.
I am still digging deeper into this problem.

Nenad




Re: [lwip-users] lwIP RAW - Low throughput during first few seconds

2018-06-29 Thread Nenad Pekez

It turns out I came to a conclusion too soon...

The 5-second problem still exists but is quite rare (1 in 20 iperf client
runs). Also, a new problem has appeared: sometimes I also get a connection
timeout. I forgot to mention that the Xilinx Ethernet (emacps) drivers were
also updated to the latest version (when we updated lwIP to 2.0.2), and we
noticed that lwIP 2.0.2 combined with the older emacps version makes these
connection failures much more frequent (it is not possible to make two
consecutive connections). It seems this is a problem with the Xilinx
drivers, so it does not concern lwIP 2.0.2. I will try to see what is
happening inside those drivers and eventually write to Xilinx support.


Thank you anyways,
Nenad


Re: [lwip-users] lwIP RAW - Low throughput during first few seconds

2018-06-29 Thread Nenad Pekez

Hi,

it seems the problem is solved. We simply decided to switch to lwIP 2.0.2
with the port provided by Xilinx SDK 2018.2.

Throughput seems much more stable now (see iperf_throughput_iperf202.png).


Best regards,
Nenad

[lwip-users] lwIP RAW - Low throughput during first few seconds

2018-06-26 Thread Nenad Pekez

Hello everyone,

we have been measuring the throughput between a Windows 7 PC and a Zynq
device using iperf. On Windows 7 we use the official iperf client, while
for our Zynq-based board we downloaded the iperf lwIP RAW server
implementation from the Zynq manufacturer, Xilinx. We send data from the PC
to the Zynq.


As expected, the throughput is quite high, almost 1 Gb/s, but only after
the first 4 to 5 seconds. During these first few seconds, data transfer is
almost non-existent. Does anyone have an idea what the cause of this could
be? Is there any way to avoid it and have an active data transfer the whole
time, including those first few seconds?


I was thinking the problem could be in the iperf client; however, even with
our own basic client implementation, the behavior is the same.


In the attachments I provide:

 * iperf_throughput.png - screenshot of iperf measurements,
 * lwipopts.h - our lwIP configuration file and
 * rxperf.c - lwIP server implementation file.

If any additional information is needed, let me know.

Best regards,
Nenad


#ifndef __LWIPOPTS_H_
#define __LWIPOPTS_H_

#ifndef PROCESSOR_LITTLE_ENDIAN
#define PROCESSOR_LITTLE_ENDIAN
#endif

#define SYS_LIGHTWEIGHT_PROT 1

#define NO_SYS 1
#define LWIP_SOCKET 0
#define LWIP_COMPAT_SOCKETS 0
#define LWIP_NETCONN 0

#define NO_SYS_NO_TIMERS 1

#define LWIP_TCP_KEEPALIVE 0

#define MEM_ALIGNMENT 64
#define MEM_SIZE 393216
#define MEMP_NUM_PBUF 16
#define MEMP_NUM_UDP_PCB 4
#define MEMP_NUM_TCP_PCB 32
#define MEMP_NUM_TCP_PCB_LISTEN 8
#define MEMP_NUM_TCP_SEG 1024
#define MEMP_NUM_SYS_TIMEOUT 8
#define MEMP_NUM_NETBUF 8
#define MEMP_NUM_NETCONN 16
#define MEMP_NUM_TCPIP_MSG_API 16
#define MEMP_NUM_TCPIP_MSG_INPKT 64

#define MEMP_NUM_SYS_TIMEOUT 8
#define PBUF_POOL_SIZE 256
#define PBUF_POOL_BUFSIZE 1700
#define PBUF_LINK_HLEN 16

#define ARP_TABLE_SIZE 2
#define ARP_QUEUEING 0

#define ICMP_TTL 255

#define IP_OPTIONS 0
#define IP_FORWARD 0
#define IP_REASSEMBLY 1
#define IP_FRAG 1
#define IP_REASS_MAX_PBUFS 128
#define IP_FRAG_MAX_MTU 1500
#define IP_DEFAULT_TTL 255
#define LWIP_CHKSUM_ALGORITHM 3

#define LWIP_UDP 1
#define UDP_TTL 255

#define LWIP_TCP 1
#define TCP_MSS 1460
#define TCP_SND_BUF 32120
#define TCP_WND 64240
#define TCP_TTL 255
#define TCP_MAXRTX 12
#define TCP_SYNMAXRTX 4
#define TCP_QUEUE_OOSEQ 1
#define TCP_SND_QUEUELEN   ((16 * TCP_SND_BUF) / TCP_MSS)
#define CHECKSUM_GEN_TCP    0
#define CHECKSUM_GEN_UDP    0
#define CHECKSUM_GEN_IP     0
#define CHECKSUM_CHECK_TCP  0
#define CHECKSUM_CHECK_UDP  0
#define CHECKSUM_CHECK_IP   0
#define LWIP_FULL_CSUM_OFFLOAD_RX  1
#define LWIP_FULL_CSUM_OFFLOAD_TX  1

#define MEMP_SEPARATE_POOLS 1
#define MEMP_NUM_FRAG_PBUF 256
#define IP_OPTIONS_ALLOWED 0
#define TCP_OVERSIZE TCP_MSS

#define LWIP_DHCP 1
#define DHCP_DOES_ARP_CHECK 1

#define CONFIG_LINKSPEED_AUTODETECT 1

#define LWIP_NETIF_HOSTNAME 1
#endif
/*
 * Copyright (c) 2007 Xilinx, Inc.  All rights reserved.
 *
 * Xilinx, Inc.
 * XILINX IS PROVIDING THIS DESIGN, CODE, OR INFORMATION "AS IS" AS A
 * COURTESY TO YOU.  BY PROVIDING THIS DESIGN, CODE, OR INFORMATION AS
 * ONE POSSIBLE   IMPLEMENTATION OF THIS FEATURE, APPLICATION OR
 * STANDARD, XILINX IS MAKING NO REPRESENTATION THAT THIS IMPLEMENTATION
 * IS FREE FROM ANY CLAIMS OF INFRINGEMENT, AND YOU ARE RESPONSIBLE
 * FOR OBTAINING ANY RIGHTS YOU MAY REQUIRE FOR YOUR IMPLEMENTATION.
 * XILINX EXPRESSLY DISCLAIMS ANY WARRANTY WHATSOEVER WITH RESPECT TO
 * THE ADEQUACY OF THE IMPLEMENTATION, INCLUDING BUT NOT LIMITED TO
 * ANY WARRANTIES OR REPRESENTATIONS THAT THIS IMPLEMENTATION IS FREE
 * FROM CLAIMS OF INFRINGEMENT, IMPLIED WARRANTIES OF MERCHANTABILITY
 * AND FITNESS FOR A PARTICULAR PURPOSE.
 *
 */

#include 
#include 

#include "lwip/err.h"
#include "lwip/tcp.h"
#ifdef __arm__
#include "xil_printf.h"
#endif

static unsigned rxperf_port = 5001; /* iperf default port */
static unsigned rxperf_server_running = 0;
static unsigned txperf_server_running = 0;

extern struct tcp_pcb *connected_pcb;

void print_rxperf_app_header();

err_t
txperf_sent_callback(void *arg, struct tcp_pcb *tpcb, u16_t len);

int
transfer_rxperf_data() {
    return 0;
}

static err_t
rxperf_recv_callback(void *arg, struct tcp_pcb *tpcb, struct pbuf *p, err_t err)
{
    /* close socket if the peer has sent the FIN packet */
    if (p == NULL) {
        xil_printf("Connection closed\n");
        tcp_close(tpcb);
        return ERR_OK;
    }

    /* all we do is say we've received the packet */
    /* we don't actually make use of it */
    tcp_recved(tpcb, p->tot_len);

    //xil_printf("p tot len: %d\n", p->tot_len);

    pbuf_free(p);
    return ERR_OK;
}

err_t
rxperf_accept_callback(void *arg, struct tcp_pcb *newpcb, err_t err)
{
    xil_printf("rxperf: Connection Accepted\r\n");

    tcp_recv(newpcb, rxperf_recv_callback);

    return ERR_OK;
}

int
start_rxperf_application()
{
    struct tcp_pcb *pcb;
    err_t err;

    /* create new TCP PCB structure */
    pcb = tcp_new();
    if (!pcb) {

Re: [lwip-users] UDP-based reliable bulk data transfer

2017-04-25 Thread Nenad Pekez

Hi Patrick,

did you actually port one of these two proposed protocols to your embedded
system, or did you implement your own reliable UDP protocol? Also, did you
get bitrates close to 100 Mb/s?

Regards,
Nenad

 Original Message 
Subject: Re: [lwip-users] UDP-based reliable bulk data transfer
Date: Tuesday, April 25, 2017 23:52 CEST
From: Patrick Klos 
Reply-To: Mailing list for lwIP users 
To: lwip-users@nongnu.org
References: <1493104589027-29424.p...@n7.nabble.com>


 On 4/25/2017 3:16 AM, pupkin wrote:
> Greetings,
>
> I need to send large quantities of data reliably from a Zynq-based device to
> a PC.
> I'm looking for a data transfer protocol which can get as close as possible
> to wire speed on a dedicated 1Gb ethernet link. It would be nice, although
> not strictly required, if it worked over arbitrary internet connections.
>
> Currently I'm considering either porting http://enet.bespin.org/ or
> http://udt.sourceforge.net/ to lwIp or rolling my own UDP-based protocol. I
> will need support for zero-copy with scatter-gather DMA.
>
> Has anyone done this before? Any suggestions?
>
> Regards
> VP

For what it's worth, I've done some tests on a dedicated 100Mb link and
they are quite reliable. I suspect the same could be said about a
dedicated 1Gb link? I ran a test once on a dedicated link between an
embedded system and a Windows computer with no lost packets for hours
(maybe even overnight - I forget the exact details).

That means you should be able to implement a high-speed UDP-based transfer
protocol with very little overhead to make it "reliable". I would bet that
creating a "window" of UDP packets (with sequence numbers) will do what you
want, with the receiver sending back an ACK of sorts every so many packets,
or when it sees a sequence number missing.
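
(Purely as an illustration, the wire header for such a scheme might look
something like this - the field names are hypothetical:)

    #include <stdint.h>

    #define PKT_DATA 0
    #define PKT_ACK  1

    struct rudp_hdr {
        uint8_t  type;    /* PKT_DATA or PKT_ACK */
        uint32_t seq;     /* DATA: packet number; ACK: highest in-order seq seen */
        uint16_t len;     /* DATA: payload length in bytes */
    };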

I'm not sure about the zero-copy scatter-gather support - you'll
probably just have to try it. The lwIP stack might scan your pbuf and
decide to copy the packet to a single buffer pbuf depending on the
capabilities of the ethernet interface. Even so, your Zynq processor
will probably be able to handle the copies faster than a 1Gb link could
transfer the data?

Good luck!

Patrick Klos
Klos Technologies, Inc.



Re: [lwip-users] UDP-based reliable bulk data transfer

2017-04-25 Thread Nenad Pekez

Hi pupkin,

I actually need to do the same thing with the same SoC, and I am still
considering different options. Have you tried to actually use either of
these two protocols you posted (enet and udt)? I am really interested in
how well they work and how close they get to 1 Gb/s - and whether they can
do a better job than TCP after all...

Regards,
Nenad

 Original Message 
Subject: [lwip-users] UDP-based reliable bulk data transfer
Date: Tuesday, April 25, 2017 09:16 CEST
From: pupkin 
Reply-To: Mailing list for lwIP users 
To: lwip-users@nongnu.org


 Greetings,

I need to send large quantities of data reliably from a Zynq-based device to
a PC.
I'm looking for a data transfer protocol which can get as close as possible
to wire speed on a dedicated 1Gb ethernet link. It would be nice, although
not strictly required, if it worked over arbitrary internet connections.

Currently I'm considering either porting http://enet.bespin.org/ or
http://udt.sourceforge.net/ to lwIp or rolling my own UDP-based protocol. I
will need support for zero-copy with scatter-gather DMA.

Has anyone done this before? Any suggestions?

Regards
VP






Re: [lwip-users] lwIP with FreeRTOS memory problem

2016-11-28 Thread Nenad Pekez

Richard, first of all, thanks for your quick response.

What I forgot to mention is that the application doesn't crash at the very
beginning; it actually works as it should for some time and then suddenly
crashes. I suppose the problem you mention would manifest itself right from
the beginning, when XEmacPs_IntrHandler is called for the first time?
Moreover, I thought that lwIP doesn't use the standard C malloc() and
free() functions (if you are talking about those), but has a custom
heap-based function, mem_malloc(), UNLESS the option MEM_LIBC_MALLOC is set
to 1, which I haven't done.


Regards,
Nenad

On 11/28/2016 4:16 PM, FreeRTOS Info wrote:


On 28/11/2016 02:17, pekez wrote:

Hello people,

I've been trying to figure out how lwIP uses memory; it's really important
to know the exact (or maximum) amount of memory that lwIP uses. I am using
lwIP v1.4.1 with FreeRTOS v8.2.3 on a custom ZYNQ board. Right now, I am
working with the application examples provided by Xilinx (XAPP1026).

On the ZYNQ board I am running an iperf server, and on the PC an iperf
client.

When both MEMP_MEM_MALLOC and MEM_LIBC_MALLOC are 0, everything works
completely as expected. However, when I set MEMP_MEM_MALLOC to 1 (according
to the "RAM usage" article from the lwIP wiki), so that every piece of
dynamically allocated memory comes from a heap of MEM_SIZE bytes, the
application always crashes, no matter how big or small MEM_SIZE is (I have
a lot of RAM, so I even tried setting MEM_SIZE to more than 20 MB!). The
application crashes because two FreeRTOS asserts fail; I get these two
prints: "Assert failed in file queue.c, line 1224" and "Assert failed in
file port.c, line 424". According to the call trace (which I provided in
the attachments), it crashes when XEmacPs_IntrHandler (I am using Xilinx
layer-2 drivers) tries to allocate memory for a pbuf.

Is this problem even related to lwIP, or maybe to FreeRTOS or the Xilinx
drivers - what do you think? I am also providing the entire lwipopts.h, so
you can check whether some other options need to be changed to make the
application work with MEMP_MEM_MALLOC set to 1.


It sounds like you are trying to call malloc from an interrupt handler -
which you can't do [in FreeRTOS], as the malloc is thread safe but not
interrupt safe. One solution, albeit with a time penalty, is to use a
deferred interrupt handler, where the IRQ does nothing more than unblock a
very high priority task and clear the interrupt. The interrupt then returns
directly to the unblocked task, which performs the actual processing that
was deferred from the interrupt. You can then optimise it so that only
interrupts that want to allocate memory get deferred to a task.


Example code snippets of how to do this can be found on the following 
page: 
http://www.freertos.org/RTOS_Task_Notification_As_Counting_Semaphore.html
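
For illustration, the core of that pattern looks roughly like this (a
minimal sketch with placeholder names, not code taken from that page):

    #include "FreeRTOS.h"
    #include "task.h"

    static TaskHandle_t xDeferredTask;   /* created at startup, high priority */

    void emac_isr(void)                  /* the real ISR: clear, notify, return */
    {
        BaseType_t xWoken = pdFALSE;

        /* ...acknowledge/clear the EMAC interrupt source here... */
        vTaskNotifyGiveFromISR(xDeferredTask, &xWoken);
        portYIELD_FROM_ISR(xWoken);
    }

    static void emac_deferred_task(void *pvParameters)
    {
        for (;;) {
            ulTaskNotifyTake(pdTRUE, portMAX_DELAY);   /* block until the ISR fires */
            /* ...allocate pbufs and do the processing deferred from the ISR... */
        }
    }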


Alternatively, provide an additional heap for use just by the ISR so you
don't get re-entrancy problems, or update the heap_n.c file
(http://www.freertos.org/a00111.html) so that it uses interrupt-safe
critical sections rather than task-only critical sections (although this
last option could make interrupts less responsive).


Regards,
Richard.

+ http://www.FreeRTOS.org
The de facto standard, downloaded every 4.2 minutes during 2015.

+ http://www.FreeRTOS.org/plus
IoT, Trace, Certification, TCP/IP, FAT FS, Training, and more...






