[vpp-dev] make in debian 8

2017-05-23 Thread emma sdi
Dear VPP folks, I built vpp on debian 8 with a few changes in the makefile. Do you want this kind of commit? Regards, khers ___ vpp-dev mailing list vpp-dev@lists.fd.io https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] Fragmented IP and ACL

2018-05-08 Thread emma sdi
Dear vpp folks, I have a simple topology and a permit+reflect rule for udp on destination port 1000, as pasted in this link. I send a big file from 172.20.1.2 to 172.20.1.1 port 1001 with nc, and I receive some packets (non-first fragments) on the second client

Re: [vpp-dev] Fragmented IP and ACL

2018-05-08 Thread emma sdi
and it > might be possible to add an option for reassembly then. > > --a > > On 8 May 2018, at 11:02, emma sdi <s3m2e1.6s...@gmail.com> wrote: > > Dear vpp folks > > I have a simple topology and a permit+reflect rule for udp on > destination port 1000 as pasted in this lin

Re: [vpp-dev] anomaly in deleting tcp idle session in vpp

2018-06-12 Thread emma sdi
Dear Andrew, Sorry for taking your time. I made a mistake and had a misconfiguration; the vpp behavior in handling TCP idle sessions is correct, and there is no problem. Thanks for your help and consideration. Best Regards, On Sun, Jun 10, 2018 at 8:39 PM, emma sdi wrote: > Dear Andrew, >

[vpp-dev] anomaly in deleting tcp idle session in vpp

2018-05-30 Thread emma sdi
Dear Folks, I have a problem with vpp stateful mode. I observed that vpp starts to delete TCP idle sessions when the session table is full. My question is: is this behavior implemented intentionally, i.e. is it the normal routine, or is it an anomaly? This behavior is not generally normal (for example in

[vpp-dev] anomaly in deleting tcp idle session in vpp

2018-06-02 Thread emma sdi
ble is full it should fifo-reuse the tcp transient sessions, not > the established ones. > > --a > > On 30 May 2018, at 14:00, emma sdi wrote: > > Dear Folks, > I have a problem with vpp stateful mode. I observed that vpp start to > delete tcp idle sessions when sess

Re: [vpp-dev] vl_api_sw_interface_dump problem

2018-09-09 Thread emma sdi
Dear community, I have the same problem, and committed this suggestion in https://gerrit.fd.io/r/#/c/14647/. Could someone please review this code; it seems OK to me. Cheers, Khers On Mon, Sep 3, 2018 at 12:18 PM sadjad wrote: > Hi Dear VPP > I tried to solve this problem. so i changed device.c in dpdk

Re: [vpp-dev] vl_api_sw_interface_dump problem

2018-09-10 Thread emma sdi
vpp and capture link state events with "show event-logger" after > problem happens? > > -- > Damjan > > On 9 Sep 2018, at 08:50, emma sdi wrote: > > Dear community > > I have the same problem, and commit this suggestion in > https://gerrit.fd.io/r/#/c/14647

Re: [vpp-dev] vl_api_sw_interface_dump problem

2018-09-10 Thread emma sdi
nge LINK_STATE_ELOGS in src/plugins/dpdk/device/init.c to 1, >> recompile vpp and capture link state events with "show event-logger" after >> problem happens? >> >> -- >> Damjan >> >> On 9 Sep 2018, at 08:50, emma sdi wrote: >> >> Dear comm

Re: [vpp-dev] vl_api_sw_interface_dump problem

2018-09-11 Thread emma sdi
:35 PM emma sdi wrote: > I tested your patch, bug still exists. > > On Mon, Sep 10, 2018 at 4:22 PM Damjan Marion wrote: > >> Can you please try this patch: >> >> https://paste.ubuntu.com/p/GtqHd2yWqK/ >> >> -- >> Damjan >> >> On 10 Sep 20

[vpp-dev] receiving SIGSEGV signal by adding ACL

2018-10-16 Thread emma sdi
Dear vpp, I tried to define specific ACLs and add them to interfaces via vpp_api_test on the master branch. When adding an interface to an ACL, vpp crashed. In this test, I put my vpp_api_test commands in a file named "vat-commands". My startup.conf and "vat-commands" files are as

[vpp-dev] non-initial fragment unexpected drop

2018-10-29 Thread emma sdi
Hi Dear VPP, While testing the fragmentation feature in VPP, I encountered a problem. First, I added an ACL as below: acl_add_replace deny proto 1 sport 2-2 dport 3-3, permit+reflect and then I saw ICMP ping packets passing through VPP, matching the second rule. At the

[vpp-dev] API client segmentation fault in VPP 18.10

2018-11-05 Thread emma sdi
Hi Dear VPP, I wrote a sample API client application to connect to VPP. Although this sample works with VPP 18.07 without any problem, it gets a segmentation fault with VPP 18.10. It would be appreciated if you could help me fix this problem caused by the latest changes in VPP 18.10. This sample API client

Re: [vpp-dev] continuous decline in throughput with acl

2018-09-24 Thread emma sdi
> On Sun, Sep 23, 2018 at 4:55 PM Andrew Yourtchenko > wrote: > >> Would you be able to confirm that it changes at a point of >> https://gerrit.fd.io/r/#/c/12770/ ? >> >> --a >> >> On 23 Sep 2018, at 13:31, emma sdi wrote: >> >> Dear C

Re: [vpp-dev] setting speed and duplex in dpdk_update_link_state

2018-09-22 Thread emma sdi
Dear vpp, Patch 14848 is verified; please review and give me your comments. Cheers On Mon, Sep 17, 2018 at 12:10 PM khers wrote: > Dear vpp > > We set speed and duplex in the function dpdk_update_link_state, although > when link_status is down, speed and duplex are unknown. > So I commit my

Re: [vpp-dev] continuous decline in throughput with acl

2018-09-24 Thread emma sdi
SEGV with 2 workers, so I repeat the test with one worker. >> Throughput is going down like the latest version. >> >> On Sun, Sep 23, 2018 at 4:55 PM Andrew  Yourtchenko >> wrote: >> >>> Would you be able to confirm that it changes at a point of >>

Re: [vpp-dev] continuous decline in throughput with acl

2018-09-26 Thread emma sdi
I tested the latest master; the throughput slowdown is fixed, although I see growing memory usage. I will post a new thread and describe how to reproduce the situation. Regards On Tue, Sep 25, 2018 at 3:14 PM emma sdi wrote: > Excellent, I'm going out of office. I will rerun my test with lat

Re: [vpp-dev] continuous decline in throughput with acl

2018-09-25 Thread emma sdi
>> >>>> --a >>>> >>>> On 23 Sep 2018, at 16:42, khers wrote: >>>> >>>> I checked out the version before the gerrit 12770 is merged to master. >>>> 2371c25fed6b2e751163df590bb9d9a93a75a0f >>>> &

Re: [vpp-dev] continuous decline in throughput with acl

2018-09-25 Thread emma sdi
r notable changes with regards to session management - >>>>> but >>>>> maybe worth it to just do git bisect and see. Should be 4-5 iterations. >>>>> Could you verify that - if indeed this is not seen in 1804. >>>>> >>>>> --a

Re: [vpp-dev] continuous decline in throughput with acl

2018-09-25 Thread emma sdi
on before the gerrit 12770 is merged to master. >>> 2371c25fed6b2e751163df590bb9d9a93a75a0f >>> >>> I got SIGSEGV with 2 workers, so I repeat the test with one worker. >>> Throughput is going down like the latest version. >>> >>> On Sun, Sep 23, 2018 at 4

Re: [vpp-dev] continuous decline in throughput with acl

2018-09-25 Thread emma sdi
er notable changes with regards to session management - >>>>>> but maybe worth it to just do git bisect and see. Should be 4-5 >>>>>> iterations. >>>>>> Could you verify that - if indeed this is not seen in 1804. >>>>>> >>>>>> -

Re: [vpp-dev] continuous decline in throughput with acl

2018-09-25 Thread emma sdi
>> >>>>>> >>>>>> On Sun, Sep 23, 2018 at 6:57 PM Andrew  Yourtchenko < >>>>>> ayour...@gmail.com> wrote: >>>>>> >>>>>>> Interesting - but you are saying in 1804 this effect is not observed >>&g

[vpp-dev] continuous decline in throughput with acl

2018-09-23 Thread emma sdi
Dear Community, I have a simple configuration as follows: startup.conf simple_acl I used the TRex packet generator with the following command: ./t-rex-64 --cfg cfg/trex_config.yaml -f cap2/sfr.yaml -m 5 -c 2 -d 6000 The

Re: [vpp-dev] continuous decline in throughput with acl

2018-09-23 Thread emma sdi
: > Would you be able to confirm that it changes at a point of > https://gerrit.fd.io/r/#/c/12770/ ? > > --a > > On 23 Sep 2018, at 13:31, emma sdi wrote: > > Dear Community > > I have simple configuration as following: > > startup.conf <https://paste.ubuntu

[vpp-dev] nat: specify a pool for an outgoing interface

2019-01-01 Thread emma sdi
Dear VPP, I want to configure a simple nat topology with 3 interfaces: ===in ==out GigabitEthernet3/0/0 GigabitEthernetb/0/0 GigabitEthernet1b/0/0 I want to source-nat all packets going out from GigabitEthernetb/0/0 to the ip address of

Re: [vpp-dev] nat: specify a pool for an outgoing interface

2019-01-06 Thread emma sdi
addresses only packets from different VRF > https://wiki.fd.io/view/VPP/NAT#NAT44_add_pool_address_for_specific_tenant > > > > Matus > > > > > > *From:* vpp-dev@lists.fd.io *On Behalf Of *emma sdi > *Sent:* Tuesday, January 1, 2019 9:10 AM > *To:* vpp-dev >

Re: [SUSPECTED SPAM] [vpp-dev] IPFIX Nat Logging

2019-01-05 Thread emma sdi
atus > > > > > > *From:* vpp-dev@lists.fd.io *On Behalf Of *emma sdi > *Sent:* Tuesday, December 25, 2018 1:49 PM > *To:* vpp-dev > *Subject:* [SUSPECTED SPAM] [vpp-dev] IPFIX Nat Logging > > > > Dear Vpp, > > I'd just configured a simple snat to chec

Re: [vpp-dev] Difference between python api and vppctl result - stats

2018-12-23 Thread emma sdi
. Best Regards Chore On Sun, Dec 23, 2018 at 9:24 AM Ole Troan wrote: > Hi there, > > On 23 Dec 2018, at 13:53, emma sdi wrote: > > Hi Dear Ole > I used vpp_papi.vpp_stats but still it is necessary to call > "collect_detailed_interface_stats API" to collect rx-unica

Re: [vpp-dev] Difference between python api and vppctl result - stats

2018-12-23 Thread emma sdi
Hi Dear VPP, I was testing interface stats in VPP 18.10. It seems that the collect_detailed_interface_stats API needs to be called for every interface to register the collector node. As a result, all combined counters, such as unicast and broadcast, start to work. My question is why this collector is not

Re: [vpp-dev] Difference between python api and vppctl result - stats

2018-12-23 Thread emma sdi
Hi Dear Ole, I used vpp_papi.vpp_stats, but it is still necessary to call the collect_detailed_interface_stats API to collect rx-unicast, tx-unicast, etc. Why is the detailed_interface_stats collector node not available by default? On Sun, Dec 23, 2018 at 8:11 AM Ole Troan wrote: > Hello, > >

Re: [vpp-dev] Difference between python api and vppctl result - stats

2018-12-24 Thread emma sdi
multicast > or broadcast requires extra checks in the data-plane that do not come for > free. So, to enable these extra stats, and hence incur the extra cost, is an > optional feature. > > > > Regards, > > neale > > > > *De : * au nom de emma sdi > *Date :

[vpp-dev] IPFIX Nat Logging

2018-12-25 Thread emma sdi
Dear Vpp, I'd just configured a simple snat to check NAT logging with IPFIX. I have a router with vpp installed and two Debian machines, one of which is the IPFIX collector. Here is the topology: https://paste.ubuntu.com/p/Hb9vpPhq67/ The IPFIX collector software is nfcapd. I defined a ip

[vpp-dev] ip_source_check_api

2018-11-27 Thread emma sdi
Dear VPP, I added ip_source_check_interface_add_del: https://gerrit.fd.io/r/#/c/16241/ It would be appreciated if you could review this commit. Best Regards Chore -=-=-=-=-=-=-=-=-=-=-=- Links: You receive all messages sent to this group. View/Reply Online (#11441):

[vpp-dev] Request to Reviewing of Getting max connection table entries in vpp ACL

2018-12-10 Thread emma sdi
Dear vpp, I'd just written an API with vpp_api_test in the vpp ACL plugin to get the max connection table entries. Here is the gerrit link: https://gerrit.fd.io/r/#/c/16411/ I think it can be useful, and I'd appreciate it if somebody reviewed it. Thanks Regards

[vpp-dev] ip_reassembly_enable_disable api

2018-12-01 Thread emma sdi
Dear VPP, https://gerrit.fd.io/r/#/c/16236/ is updated (I had an unknown problem with the build, which was fixed after rebasing). It would be appreciated if Klement Sekera could review this commit again. Best Regards Chore

[vpp-dev] ip_reassembly_enable_disable

2018-11-27 Thread emma sdi
Dear VPP, I pushed my suggested code adding the ip_reassembly_enable_disable vat command and its handler in gerrit id 16201. It would be appreciated if you could review it. Best Regards Chore

Re: [vpp-dev] nat: specify a pool for an outgoing interface

2019-01-07 Thread emma sdi
can translate to different addresses only packets from different VRF > https://wiki.fd.io/view/VPP/NAT#NAT44_add_pool_address_for_specific_tenant > > > > Matus > > > > > > *From:* vpp-dev@lists.fd.io *On Behalf Of *emma sdi > *Sent:* Tuesday,

Re: [SUSPECTED SPAM] [vpp-dev] IPFIX Nat Logging

2019-01-13 Thread emma sdi
Dear Ole, I checked ipfix with flowprobe and I have the same problem again. I don't see any ipfix packet logs in "vppctl sh trace".
vppctl sh error
   Count    Node         Reason
   14802    null-node    blackholed packets
   13

Re: [SUSPECTED SPAM] [vpp-dev] IPFIX Nat Logging

2019-01-09 Thread emma sdi
Dear Fabian, Unfortunately, the collector counter is still stuck at zero (no ipfix packets received). I would like to check my topology and configuration with you. Thanks in advance. Client1 IP: 2.2.2.2 Client2 IP: 3.3.3.3 DUT config: GigabitEthernet3/0/0 (up):   L3 2.2.2.1/24

Re: [SUSPECTED SPAM] [vpp-dev] IPFIX Nat Logging

2019-01-09 Thread emma sdi
gt; > > Matus > > > > > > *From:* vpp-dev@lists.fd.io *On Behalf Of *emma sdi > *Sent:* Wednesday, January 9, 2019 12:32 PM > *To:* vpp-dev@lists.fd.io > *Subject:* Re: [SUSPECTED SPAM] [vpp-dev] IPFIX Nat Logging > > > > Dear Fabian, > > Unfort

Re: [SUSPECTED SPAM] [vpp-dev] IPFIX Nat Logging

2019-01-09 Thread emma sdi
> > > > IPfix events are aggregated, to send it immediately use “ipfix flush" > > > > Matus > > > > > > *From:* vpp-dev@lists.fd.io *On Behalf Of *emma sdi > *Sent:* Tuesday, December 25, 2018 1:49 PM > *To:* vpp-dev > *Subject:* [SUSPECTED SPAM] [vpp

Re: [SUSPECTED SPAM] [vpp-dev] IPFIX Nat Logging

2019-01-09 Thread emma sdi
a - PANTHEON > TECHNOLOGIES at Cisco) < matfa...@cisco.com > wrote: > > > >> >> >> Hi, >> >> >> >>   >> >> >> >> IPfix events are aggregated, to send it immediately use “ipfix flush" >> >> >> >

Re: [SUSPECTED SPAM] [vpp-dev] IPFIX Nat Logging

2019-01-14 Thread emma sdi
Dear Ole, No, I don't see anything on the wire. IPFIX logging doesn't work in flowprobe or nat mode for me. My vpp version is stable 18.10. Warm Regards,

[vpp-dev] setting speed and duplex in dpdk_update_link_state

2018-09-17 Thread emma sdi
Dear vpp, We set speed and duplex in the function dpdk_update_link_state, although when link_status is down, speed and duplex are unknown. So I committed my suggestion in gerrit id 14848. Cheers

Re: [vpp-dev] Multi Core Nat

2019-03-25 Thread emma sdi
[Edited Message Follows] Dear VPP Folks, I have the same problem too. NAT in multi-core mode doesn't show the correct functionality it has with a single core. When nat is configured, the connections between my two clients on the two sides of the DUT are stopped; in fact, no packet is passed by the DUT. I

Re: [vpp-dev] Multi Core Nat

2019-03-25 Thread emma sdi
Dear VPP Folks, I have the same problem too. NAT in multi-core mode doesn't show the correct functionality it has with a single core. When nat is configured, connections between my two clients on the two sides of the DUT are stopped. I checked the trace of packets; it seems the packets, after walking some

[vpp-dev] NAT New Node Registration

2019-03-05 Thread emma sdi
Hi Dear VPP, I need to perform the out2in nat operation before input-acl, so I'm trying to register a new node as below in plugins/nat/nat.c: VNET_FEATURE_INIT (ip4_snat_out2in_input, static) = { .arc_name = "ip4-unicast", .node_name = "nat44-out2in-input",

Re: [vpp-dev] Problem in using NAT and ABF plugin together

2019-06-25 Thread emma sdi
Hi Berna, I think I fixed your problem: https://gerrit.fd.io/r/#/c/20308/ Would you please check this code and send feedback? On Wed, May 1, 2019 at 4:44 PM Berna Demir wrote: > Hi > > I configured my vpp v19.08-rc0~134-g1f8eeb7cb~b72 with 2 abf policy and 1 > static mapping in the following

[vpp-dev] NAT does not exist in MAINTAINERS file

2019-06-25 Thread emma sdi
Hi Dear VPP, Why does NAT not exist in the MAINTAINERS file? As you know, during the build phase the commit message subject is checked against the MAINTAINERS file, so how is a successful commit to the NAT plug-in possible? Thanks Chore

[vpp-dev] test_quic.py style is fixed

2019-06-25 Thread emma sdi
Hi Dear VPP, Every build in Jenkins was failing because of a test_quic.py style problem. I fixed it as you can see below: https://gerrit.fd.io/r/#/c/20317/ It would be appreciated if you could review this code. Thanks Chore

Re: [vpp-dev] NAT does not exist in MAINTAINERS file

2019-06-25 Thread emma sdi
> add Ole as maintainer... > > — > Damjan > > > On Jun 25, 2019, at 2:20 PM, emma sdi wrote: > > > > Hi Dear VPP > > > > Why NAT does not exist in MAINTAINERS file?! as you know, during build > phase, commit message subject is inspected inside MAINTA

[vpp-dev] vac_client_constructor memory allocation

2019-11-30 Thread emma sdi
Hi, The function vac_client_constructor allocates 1 GB of memory in every binary that links to the vlibmemoryclient library. I have limited memory on my test machine. Is there any way to resolve this issue? Regards, Khers

Re: [vpp-dev] vac_client_constructor memory allocation

2019-12-02 Thread emma sdi
You are right; clib_mem_init calls mmap to allocate virtual memory, but I got random SIGSEGVs in the client program when the running machine has low memory (about 2 GB). So I changed client.c and memory_client.c to allocate less virtual memory, and only when connect_to_vlib is called. The

[vpp-dev] VPP Route Management vs Kernel

2019-11-04 Thread emma sdi
Hi Dear VPP, There is a difference between VPP's and the kernel's behavior in route management. For instance, consider that a route like "ip route add x.x.x.x/x via y.y.y.y dev ethX" is added. It would be deleted when ethX goes down in the kernel, while VPP's behavior is that forwarding of that route

Re: [vpp-dev] VPP Route Management vs Kernel

2019-11-04 Thread emma sdi
n would violate this > model. > > > > Regards, > > neale > > > > > > *From: * on behalf of emma sdi < > s3m2e1.6s...@gmail.com> > *Date: *Monday 4 November 2019 at 11:07 > *To: *vpp-dev > *Subject: *[vpp-dev] VPP Route Management vs Kernel