Dear FD.io-ers,
I know some of you may have missed this announcement on the OpenStack mailing list.
Regards,
Jerome
From: Ian Wells
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Monday, 31 July 2017 at 01:07
To: OpenStack Development Mailing List
Subject: [openstack-
Hi All,
In the absence of any objections I have done:
https://gerrit.fd.io/r/#/c/7819/
I’ll have a crack at the necessary CSIT changes. Is this:
https://wiki.fd.io/view/CSIT/Tutorials/Vagrant/Virtualbox/Ubuntu
still the recommended way to test CSIT code changes?
Thanks,
neale
From: Dave Wa
Hi Hamid,
Yes, we do have a user-space TCP stack, but it is still under development. You
can find examples of external apps here [1] and internal apps here [2, 3].
All of these use the binary API to interact with the session-layer code. We’ll
soon publish a wrapper library that should make inte
Hi All,
I would like to propose the addition of a dedicated SW interface event message
type rather than overloading the set-flags request. Overloading message types
causes problems for the automatic API generation tools.
https://gerrit.fd.io/r/#/c/7925/
regards,
neale
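For readers unfamiliar with VPP's API definition language: a dedicated event message would be declared in a *.api file along the lines of the sketch below. The field names here are illustrative assumptions, not the actual definition from the proposed patch.

```
/* Hypothetical sketch of a dedicated SW interface event message,
 * in VPP *.api definition syntax. Field names are illustrative. */
define sw_interface_event
{
  u32 client_index;   /* registered client to notify */
  u32 context;        /* opaque value echoed back to the client */
  u32 sw_if_index;    /* interface the event refers to */
  u8 admin_up_down;   /* admin state changed */
  u8 link_up_down;    /* link state changed */
  u8 deleted;         /* interface was deleted */
};
```

A dedicated message like this lets the API generators emit a distinct event type instead of reusing the set-flags request structure for notifications.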
You are passing each packet twice through hostvpp, so effectively your
hostvpp performance is 8 Mpps per core.
There are several factors which can impact performance (CPU speed, NUMA
locality, memory channel utilisation), but you still cannot expect
order-of-magnitude better numbers.
Best performan
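The doubling described above can be made explicit: when every packet crosses the host VPP instance twice (host to container and back), the internal forwarding rate is twice the externally measured rate. A minimal sketch of the arithmetic (the 4 Mpps measured figure is assumed for illustration):

```python
# Each packet traverses the host VPP instance twice (out to the
# container and back), so the instance internally forwards two
# packets of work per externally observed packet.
def internal_rate_mpps(observed_mpps: float, traversals: int = 2) -> float:
    """Effective per-core forwarding rate inside VPP, in Mpps."""
    return observed_mpps * traversals

# Illustrative numbers: 4 Mpps measured end-to-end means the host
# VPP core is actually doing 8 Mpps of forwarding work.
print(internal_rate_mpps(4.0))  # -> 8.0
```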
Hi,
I am interested in connecting two instances of VPP with memif (one VPP
running in the host and another VPP running in an LXC container).
I have achieved the functionality goal with memif, but I have a problem with
the performance test.
I performed the following steps, in order:
1. first of all I installed
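For reference, a memif pair between a host VPP and a container VPP is typically wired up with CLI along these lines. This is a sketch only: the exact `create memif` syntax differs between VPP releases, and the interface names, id, and addresses below are illustrative, so consult the CLI help on your build.

```
# On the host VPP instance (master side):
create memif id 0 master
set interface state memif0/0 up
set interface ip address memif0/0 192.168.1.1/24

# On the container VPP instance (slave side), using the same socket file:
create memif id 0 slave
set interface state memif0/0 up
set interface ip address memif0/0 192.168.1.2/24
```

Both sides must agree on the socket file and id for the shared-memory ring to come up.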
This is test code. No warranty, express or implied. It would take a few
minutes to unify the two API message handlers, but at some point in the near
future I’m going to clean up the API client registration nonsense involved.
As you might imagine, it’s easy to make direct calls from within vp
Hi,
I am trying to use mTCP in my application, but it is very primitive; for
example, it permits only one epoll instance and has no equivalent of the
shutdown API. Does VPP support a user-space TCP stack similar to mTCP? If so,
where is a sample application that uses it?
Thanks,
hamid
Hi John,
The ACL feature is working after setting an IP address on the sub-interface.
Thanks for the help.
Regards,
Balaji
On Fri, Aug 4, 2017 at 10:24 PM, John Lo (loj) wrote:
> Hi Balaj
>
>
>
> I think the problem is that you did not configure an IP address on the
> sub-interface. Thus, IP4 forwarding i
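The fix described in the reply above (configuring an IP address on the sub-interface so that the IP4 forwarding path is enabled) would be done with CLI along these lines; the interface name, VLAN tag, and address are illustrative.

```
create sub-interfaces GigabitEthernet0/8/0 100
set interface state GigabitEthernet0/8/0.100 up
set interface ip address GigabitEthernet0/8/0.100 10.10.10.1/24
```

Without an address, the sub-interface is not IP-enabled, so tagged packets never reach the input ACL node on the IP path.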
Hi John,
Thanks for quick response.
I tried, as you suggested, associating an input ACL on the IP-forwarding path
for tagged packets. Ingress packets are not hitting the ACL node and are
dropped. However, ACLs with src/dst IP, MAC address, and UDP port numbers are
fine.
*Following are the configuration steps follo
Hi Dave, thanks for your inputs.
I am just experimenting based on your inputs: when I push < 7 Gbps, I don't
see any packet losses on the DPDK side, and the vectors/call is still ~1.1. As
you mentioned, < 2.0 means there is a lot of room left.
Then I increased the traffic to achieve a bit rate of about 8 Gbps to
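The vectors/call figure discussed above comes from VPP's `show runtime` output: it is the total number of packets (vectors) a node processed divided by the number of dispatch calls, and values far below the 256-packet frame limit indicate headroom. A minimal sketch of the arithmetic (the counter values are illustrative):

```python
# vectors/call as reported by "show runtime": total packets processed
# by a node divided by the number of times the node was dispatched.
def vectors_per_call(vectors: int, calls: int) -> float:
    return vectors / calls

# A node that handled 1,100,000 packets across 1,000,000 dispatches
# averages ~1.1 packets per call -- far below the 256-packet frame
# limit, i.e. plenty of headroom before the node saturates.
print(round(vectors_per_call(1_100_000, 1_000_000), 2))  # -> 1.1
```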
Hi,
Can these two commands (test http server, test tcp server) be configured
together?
When I configure the two commands at the same time, I get a
multiple-registrations error, shown below:
DBGvpp# test tcp server
DBGvpp# test http server
0: vl_msg_api_config:682: BUG: multiple registrations of
In general, there is no limit to the number of MPLS labels. If you can be more
specific about what you are referring to (i.e. how many labels can be pushed
per LSP, or how many MPLS lookups/pops per packet) then I can give you a more
definitive answer.
/neale
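As background to "no limit": in the MPLS encapsulation (RFC 3032), each label in the stack is simply another 4-byte entry, with the bottom-of-stack (S) bit set only on the last one, so the wire format itself imposes no fixed depth. A small sketch of that encoding:

```python
import struct

def encode_label_stack(labels, tc=0, ttl=64):
    """Encode an MPLS label stack per RFC 3032.
    Each 4-byte entry is label(20 bits) | TC(3) | S(1) | TTL(8);
    S=1 marks the bottom (last) entry of the stack."""
    out = b""
    for i, label in enumerate(labels):
        s = 1 if i == len(labels) - 1 else 0
        out += struct.pack(">I", (label << 12) | (tc << 9) | (s << 8) | ttl)
    return out

# Three labels -> three 4-byte entries; the format itself has no cap.
stack = encode_label_stack([100, 200, 300])
print(len(stack))  # -> 12
```

Practical limits come from implementation and hardware, not from the label-stack format.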
-----Original Message-----
From:
Please tell me: how many MPLS labels in the stack does VPP support?
07.08.2017, 07:25, "Kinsella, Ray" :
> Thanks !
>
> On 31/07/2017 13:51, Neale Ranns (nranns) wrote:
>> Hi Chris,
>>
>> Thanks for fixing it!
>> Release notes now available at:
>> https://docs.fd.io/vpp/17.07/release_notes_1707.