Hi, VPP experts,
I am using the VPP host stack with Java Netty, but it segfaults; the reason is that epoll uses an svm_fifo_t that is NULL.
Can you give me some suggestions? Thank you.
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#21420): https://lists.fd.io/g/vpp-dev/message/21420
Hi, VPP experts,
I am now using the VPP host stack for UDP and have met some problems:
1. The UDP socket must use the connect() function; if the user calls sendto() it
causes an "ip address not connected" error.
2. If I use a UDP socket to send a packet bigger than 1500 bytes, UDP splits it into
many packets. Is there some method to stop it from splitting?
3. startup.conf set
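For point 1, here is a minimal sketch of the connect()-then-send() pattern that VCL/LDP expects for UDP; the address, port, and helper name are placeholders, not from the report:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Open a UDP socket and fix its peer with connect(); afterwards plain
 * send() works and no per-call destination address is needed. */
static int
udp_connect (const char *ip, int port)
{
  int fd = socket (AF_INET, SOCK_DGRAM, 0);
  if (fd < 0)
    return -1;

  struct sockaddr_in peer;
  memset (&peer, 0, sizeof (peer));
  peer.sin_family = AF_INET;
  peer.sin_port = htons (port);
  inet_pton (AF_INET, ip, &peer.sin_addr);

  if (connect (fd, (struct sockaddr *) &peer, sizeof (peer)) < 0)
    {
      close (fd);
      return -1;
    }
  return fd;
}
```

Run under LDP the usual way (LD_PRELOAD of libvcl_ldpreload.so); the connected socket then maps onto a VPP host-stack session.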
-- Forwarded message --
From: NUAA无痕
Date: Wed, May 25, 2022, 16:37
Subject: Re: [vpp-dev] hoststack-java netty segmentfault
To: Florin Coras
Hi, Florin,
Yes, I am using LDP + Java. I have tested a plain Java socket, and it works fine!
I think the JVM also uses libc.so, so I guess the Java socket calls will be translated
bigger than 1500, I still send "hello" and
it cannot stop; the count always increases, and the server can also receive it.
If the MTU is less than 1500, it is OK.
By the way, the NIC that VPP manages has an MTU of 9000; setting udp { mtu
9000 } in startup.conf causes this problem.
Best
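Based on the observation above, a hedged startup.conf sketch that avoids the behaviour would keep the udp mtu at or below the 1500-byte software MTU in front of it (the value is illustrative):

```
udp {
  # an mtu of 9000 here triggered the endless-send behaviour above;
  # keeping it at or below the smallest MTU on the path was reported OK
  mtu 1500
}
```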
Florin Coras wrote on Sun, May 22, 2022, at 06:57:
done properly.
>
> Regards,
> Florin
>
> > On May 27, 2022, at 12:53 AM, NUAA无痕 wrote:
> >
> > Hi, vpp experts
> >
> > I am using VPP version 21.01.
> > My project uses jvpp to communicate with VPP via the binary API, but now after a
> long run (about 14h)
t on the condvar or the broadcast was missed by
> vpp.
>
> Try to check what your api client is doing. That might shed some light on
> the issue.
>
> Hope this helps.
>
> Regards,
> Florin
>
> On May 27, 2022, at 7:57 PM, NUAA无痕 wrote:
>
> Hi, Florin
>
> I would
Hi, Florin,
About this question: I compared the C++ code with the jvpp code, and I found that
jvpp may have a bug, and even updating VPP cannot resolve it.
The jvpp code follows VPP version 19.01, which still has the jvpp example:
tailp = (i8 *) (&q->data[0] + q->elsize * q->tail);
clib_memcpy_fast (tailp, elem, q->elsize);
q->tail++;
q->cursize++;
need_broadcast = (q->cursize == 1);
if (q->tail == q->maxsize)
  q->tail = 0;
if (need_broadcast)
  svm_queue_send_signal_inline (q, 1);
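For context, here is a self-contained toy version of the enqueue logic quoted above (an illustrative sketch, not the real svm_queue_add): the element is copied at the tail, the tail wraps at maxsize, and a signal is sent only when the queue goes from empty to non-empty, which is exactly the window where a missed broadcast leaves a reader blocked.

```c
#include <string.h>

typedef struct
{
  char data[4 * 8];   /* maxsize * elsize */
  int elsize;
  int maxsize;
  int cursize;
  int tail;
  int signals_sent;   /* stand-in for svm_queue_send_signal_inline() */
} toy_queue_t;

static void
toy_queue_add (toy_queue_t *q, const void *elem)
{
  char *tailp = &q->data[0] + q->elsize * q->tail;
  memcpy (tailp, elem, q->elsize);
  q->tail++;
  q->cursize++;
  int need_broadcast = (q->cursize == 1); /* first element: wake readers */
  if (q->tail == q->maxsize)
    q->tail = 0;                          /* wrap the ring */
  if (need_broadcast)
    q->signals_sent++;
}
```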
> Regards,
> Florin
>
> > On Jun 3, 2022, at 7:55 AM, NUAA无痕 wrote:
> >
> > Hi, Florin
> >
> > About this question: I compared the C++ code with the jvpp code, and I found
> that jvpp may have a bug, and even updating VPP cannot resolve it.
> >
Hi, Ben,
Thanks for your hint. I used vlib_buffer_clone() to copy the packet, but the
performance dropped.
I found that vlib_buffer_clone() uses vlib_buffer_alloc(); is there another
way to improve performance?
By the way, I also used rte_mbuf_from_vlib_buffer() to get DPDK's rte_mbuf
and use the DPDK buffer.
Hi, VPP experts,
I am studying QoS, but I found that HQoS is not supported. Do you have a plan to
support it?
If I want to use VPP QoS, can you give me some suggestions?
Best regards,
wanghe
Hi, VPP experts,
My VPP version is 22.06.
vcl.conf:
{
...
multi-thread-workers
}
I am using a multi-threaded program with the LDP host stack, and vcl.conf is
configured with 'multi-thread-workers'.
My program listens on many ports, but now the program fails at runtime.
Error message:
vls_mt_session_migrate:1065 failed
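For reference, a hedged sketch of a complete vcl.conf enabling this feature (the fifo sizes are illustrative values, not taken from the report):

```
vcl {
  rx-fifo-size 4000000
  tx-fifo-size 4000000
  multi-thread-workers
}
```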
Hi, VPP experts,
When I use the bihash_init_8_8 (p, "test", 0, 0) function on an arm64 machine, it
causes an abort().
The gcc version is 9.3.0.
The reason is that when buckets is 0, the min_log2() function in src/vppinfra/clib.h
calls count_leading_zeros(0), but the return value differs between x86
and arm64; on x86 it is 63
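The pitfall can be reproduced and guarded in isolation: count_leading_zeros() typically lowers to __builtin_clzll, which is documented as undefined for an input of 0, so each architecture is free to return something different. A sketch of a zero-safe variant (safe_min_log2 is a hypothetical name, not the VPP API):

```c
/* 63 - clzll(x) gives floor(log2(x)) for x > 0; the x == 0 case must be
 * handled explicitly because __builtin_clzll (0) is undefined behavior. */
static int
safe_min_log2 (unsigned long long x)
{
  if (x == 0)
    return 0; /* pick an explicit convention for the degenerate case */
  return 63 - __builtin_clzll (x);
}
```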
Hi, VPP experts,
I have a task that sends one packet to three NICs.
My method is to use the "vlib_buffer_alloc" function to make three copies, but
this method has poor performance: at about 10 Gbps, each copy
reduces throughput by about 1.5 Gbps.
Does VPP have something like the DPDK buffer's refcnt to avoid the copy?
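The DPDK idea referred to here can be sketched with a toy reference-counted buffer (illustrative types and names, not the VPP or DPDK API): each extra NIC takes a reference instead of a copy, and the buffer is only returned to the pool when the last owner releases it.

```c
#include <stdlib.h>

typedef struct
{
  int refcnt;
  char payload[64];
} toy_buf_t;

static toy_buf_t *
toy_buf_alloc (void)
{
  toy_buf_t *b = calloc (1, sizeof (*b));
  b->refcnt = 1;
  return b;
}

static void
toy_buf_ref (toy_buf_t *b)
{
  b->refcnt++;                  /* one extra owner per output NIC */
}

/* returns 1 if the buffer was actually freed */
static int
toy_buf_unref (toy_buf_t *b)
{
  if (--b->refcnt == 0)
    {
      free (b);
      return 1;
    }
  return 0;
}
```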
Hi, VPP experts,
I found that VPP removed the DPDK config option vlan-strip-offload on Nov 12, 2021.
Can you explain why it was removed? Thanks.
I need this function, so can I restore this code in the vpp-22.06 version?
Hi, Xiaodong,
IXIA tests RFC 2460 section 4.5:
https://www.rfc-editor.org/rfc/rfc2460#section-4.5
The software sends two packets; the software-configured MTU is 1500, and it
sends two 768-byte ICMPv6 request fragment packets.
It expects to receive two reply packets.
CentOS 7 can receive two packets, but
Hi, VPP experts,
Now I have a problem using the IXIA software to test IPv6 support.
RFC 2460 points out that every machine supporting IPv6 fragments must have an MTU
of no less than 1280,
but the IXIA IPv6 fragmentation test sends two request fragment packets with a data
size of 768 and expects to receive two reply packets.
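The arithmetic behind the test case, as a sketch (header sizes per RFC 2460): a fragment carries the 40-byte IPv6 header plus an 8-byte Fragment extension header, and every fragment except the last must carry a payload that is a multiple of 8 bytes, so a 768-byte fragment fits both the 1280-byte minimum MTU and the configured 1500.

```c
/* Largest payload one IPv6 fragment can carry for a given link MTU:
 * subtract the 40-byte IPv6 header and the 8-byte Fragment extension
 * header, then round down to a multiple of 8 (required for all but the
 * last fragment, RFC 2460 section 4.5). */
#define IP6_HDR_BYTES  40
#define FRAG_HDR_BYTES 8

static int
max_frag_payload (int mtu)
{
  return ((mtu - IP6_HDR_BYTES - FRAG_HDR_BYTES) / 8) * 8;
}
```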
Hi, VPP experts,
I am testing VPP performance with different plugin node counts, but something
strange happened.
The test environment uses the VPP packet-generator to produce 128-byte packets
on a 10G network.
I use VPP's make-plugin.sh to create 10 dual-type plugin nodes.
The first scene is:
ip4-input -> test-node
> ...planation), I'm curious to know why you need this function. Can you
> describe what you're trying to accomplish with VPP? Perhaps there is a
> valid/contemporary configuration possible that accomplishes the goal
> without needing vlan stripping.
>
> groet,
> Pim
>
> On Wed, Nov 2,