Re: [vpp-dev] ipv6 fragment problem

2022-12-14 Thread NUAA
Hi, Xiaodong
The IXIA test exercises RFC 2460 section 4.5:
https://www.rfc-editor.org/rfc/rfc2460#section-4.5

The test software is configured with an MTU of 1500 and sends an ICMPv6 echo
request split into two 768-byte fragments.

It expects to receive the echo reply as two fragments as well.

CentOS 7 receives both fragments, but on receiving the first one it replies
with a packet that Wireshark shows as "parameter problem: erroneous header
field encountered",
so I suspect CentOS 7 does not support this case.

IXIA shows that the two request fragments were sent but the two expected
reply packets were not received.

I do not know whether there is an IPv6 setting in the system that controls
this. Thank you.

Best regards
wanghe




[vpp-dev] ipv6 fragment problem

2022-12-14 Thread NUAA
Hi, vpp experts
I have run into a problem while using IXIA software to test IPv6 support.

RFC 2460 requires every link that carries IPv6 to have an MTU of at least
1280 bytes,
but the IXIA IPv6 fragmentation test sends an echo request as two fragments
with a data size of 768 bytes and expects two reply fragments. Our system
(CentOS 7) only replies with an ICMPv6 error packet, apparently because the
fragment size is less than 1280.

Does VPP support IPv6 fragments smaller than 1280 bytes?
Is there also a setting on CentOS 7 that makes it accept them?

I am unfamiliar with IPv6 and am looking forward to your help!

Best regards
wanghe




[vpp-dev] vpp node performance

2022-11-14 Thread NUAA
Hi, vpp experts

I am testing how the number of plugin nodes affects VPP performance, and
something strange happened.

The test environment uses the VPP packet generator to produce 128-byte
packets on a 10G network.

I used VPP's make-plugin.sh to create ten plugin nodes of the dual-loop type.

The first scenario is:
ip4-input -> test-node -> interface-output
and the throughput is 7.1G.

The second scenario is:
ip4-input -> test1-node -> test2-node -> interface-output
and the throughput is 5.1G.

The third scenario is:
ip4-input -> test1-node -> test2-node -> test3-node -> interface-output
and the throughput is 7.1G.

The fourth scenario is:
ip4-input -> test1-node -> test2-node -> test3-node -> test4-node ->
interface-output
and the throughput is 4.8G.

The pattern continues like this for five, six, and so on, up to ten nodes.

When the number of added nodes is odd, the throughput is good;
when it is even, the throughput drops.

Each added node only hands the packet to the next node and does nothing else
(a minimal sketch of such a node is shown below).

Can you explain why this happens? Thank you.
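For reference, this is roughly what such a pass-through node boils down to. It
is a minimal sketch, not the actual test plugin: the node name, the single
next node ("interface-output") and the registration details are assumptions.

#include <vlib/vlib.h>
#include <vnet/vnet.h>

/* Single next node: everything is handed to "interface-output".
   In the multi-node scenarios each test node would instead name the
   following test node here. */
#define TEST_NEXT_OUTPUT 0

static uword
test_node_fn (vlib_main_t *vm, vlib_node_runtime_t *node, vlib_frame_t *frame)
{
  u32 n_left_from, *from, *to_next;
  u32 next_index = TEST_NEXT_OUTPUT; /* only one next node */

  from = vlib_frame_vector_args (frame);
  n_left_from = frame->n_vectors;

  while (n_left_from > 0)
    {
      u32 n_left_to_next;
      vlib_get_next_frame (vm, node, next_index, to_next, n_left_to_next);

      while (n_left_from > 0 && n_left_to_next > 0)
        {
          /* pass the buffer index through unchanged */
          to_next[0] = from[0];
          from += 1;
          to_next += 1;
          n_left_from -= 1;
          n_left_to_next -= 1;
        }
      vlib_put_next_frame (vm, node, next_index, n_left_to_next);
    }
  return frame->n_vectors;
}

VLIB_REGISTER_NODE (test_node) = {
  .function = test_node_fn,
  .name = "test-node",
  .vector_size = sizeof (u32),
  .n_next_nodes = 1,
  .next_nodes = { [TEST_NEXT_OUTPUT] = "interface-output" },
};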

Best regards
wanghe




Re: [vpp-dev] vpp vlan

2022-11-02 Thread NUAA
Hi, Pim
I use VPP to receive various packets and forward them to different
interfaces according to rules.
VPP drops VLAN-tagged packets because it cannot find a matching VLAN
sub-interface, and sometimes we do not need the VLAN label at all. When VPP
is deployed in a LAN (for example inside a company) the IP addresses are
already distinct, so we can simply strip the VLAN label.
Another option is to handle it in the ethernet-input node: if the packet is
VLAN-tagged, vlib_buffer_advance() can be used to skip over the VLAN tag (a
rough sketch is shown below).
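For illustration, a minimal sketch of that second approach, assuming it runs
before the normal ethernet-input parsing, with the buffer's current data
pointing at the start of the Ethernet header, and that only a single 802.1Q
tag needs to be removed (the function name is hypothetical):

#include <string.h>
#include <vlib/vlib.h>
#include <vnet/ethernet/ethernet.h>

/* Make a single-tagged frame look untagged: copy the 12 bytes of MAC
   addresses forward over the 4-byte 802.1Q tag, then advance the buffer
   so the frame now starts at the shifted destination MAC. */
static void
strip_single_vlan_tag (vlib_buffer_t *b)
{
  ethernet_header_t *eth = vlib_buffer_get_current (b);

  if (clib_net_to_host_u16 (eth->type) == ETHERNET_TYPE_VLAN)
    {
      memmove ((u8 *) eth + 4, eth, 2 * sizeof (eth->dst_address));
      vlib_buffer_advance (b, 4);
    }
}

Double-tagged (QinQ) frames would need the same step applied twice.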





Pim van Pelt wrote on Wed, Nov 2, 2022 at 17:45:

> Hoi,
>
> While I'll leave other folks to comment on why the feature was removed
> (although https://gerrit.fd.io/r/c/vpp/+/34822 does give a good
> explanation), I'm curious to know why you need this function. Can you
> describe what you're trying to accomplish with VPP? Perhaps there is a
> valid/contemporary configuration possible that accomplishes the goal
> without needing vlan stripping.
>
> groet,
> Pim
>
> On Wed, Nov 2, 2022 at 10:04 AM NUAA无痕  wrote:
>
>> Hi,vpp experts
>> I found that vpp remove dpdk config vlan-strip-offload in 2021 Nov 12
>> ,can you explain why you remove it? thanks
>>
>> I need this function,so if i can recover this code in vpp-2206 version ?
>>
>>
>>
>>
>
> --
> Pim van Pelt 
> PBVP1-RIPE - http://www.ipng.nl/
>
> 
>
>




[vpp-dev] vpp vlan

2022-11-02 Thread NUAA
Hi, vpp experts
I found that VPP removed the dpdk vlan-strip-offload configuration option on
Nov 12, 2021. Can you explain why it was removed? Thanks.

I need this function, so can I bring that code back in the vpp-2206 version?




[vpp-dev] vpp-2206 bug

2022-10-09 Thread NUAA
Hi, vpp experts

When I call bihash_init_8_8 (p, "test", 0, 0) on an arm64 machine, it
aborts.
The gcc version is 9.3.0.

The reason is that when the number of buckets is 0, the min_log2() function
in src/vppinfra/clib.h ends up calling count_leading_zero(0), and the return
value differs between x86 and arm64: x86 gives 63 but arm64 gives 64. A
count-leading-zeros of 0 is undefined behavior for the compiler, so
min_log2() needs an explicit check for it.
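As a rough sketch of the kind of guard being suggested (safe_min_log2 is a
hypothetical standalone helper, not the actual VPP patch, which would go
inside min_log2() in src/vppinfra/clib.h):

#include <stdint.h>

/* floor(log2(x)) for 64-bit values, with an explicit zero check so that
   __builtin_clzll() is never called with 0; clz of 0 is undefined behavior,
   which is exactly why x86 and arm64 disagree on the result. */
static inline uint64_t
safe_min_log2 (uint64_t x)
{
  if (x == 0)
    return 0; /* defined fallback; the right value depends on the caller */
  return 63 - (uint64_t) __builtin_clzll (x);
}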




[vpp-dev] VPP LDP ERROR

2022-08-30 Thread NUAA
Hi, vpp experts
My VPP version is 22.06.

vcl.conf:
{
...
multi-thread-workers
}

I am running a multi-threaded program over the LDP host stack, with
'multi-thread-workers' set in vcl.conf.
The program listens on many ports, but it now fails at runtime with this
error message:
vls_mt_session_migrate:1065 failed to wait rpc response

Can you give me a suggestion?

best regards
wanghe




[vpp-dev] About vpp qos

2022-08-29 Thread NUAA
Hi, vpp experts
I am studying QoS, but I found that HQoS is not supported. Do you have a
plan to support it?
If I want to use VPP QoS, can you give me some suggestions?

best  regards
wanghe




Re: [vpp-dev] Copy packet forwarding performance issues

2022-08-02 Thread NUAA
Hi, Ben
Thanks for your hint. I used vlib_buffer_clone() to copy the packet, but the
throughput still dropped.
I found that vlib_buffer_clone() calls vlib_buffer_alloc() internally. Is
there another way to improve performance?

By the way, I also tried rte_mbuf_from_vlib_buffer() so I could rely on the
DPDK rte_mbuf refcnt, but that approach requires copying VPP's buffer into a
DPDK buffer.
Can VPP use DPDK's rte_mbuf directly?
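For context, the way I understand the clone-based fan-out is sketched below;
this is illustrative only (the helper name, the 64-byte head size and the
assumption that bi0 is a valid buffer index inside a node function are mine):

#include <vlib/vlib.h>

/* Fan one packet out towards three destinations by cloning instead of
   copying: each clone gets a private copy of the first head_end_offset
   bytes (enough to rewrite L2/L3 headers independently) and shares the
   rest of the source buffer through its reference count.  The return
   value is the number of clones actually created, which can be smaller
   than requested if buffer allocation fails. */
static inline u16
fan_out_three (vlib_main_t *vm, u32 bi0, u32 clones[3])
{
  return vlib_buffer_clone (vm, bi0, clones, 3, 64 /* head_end_offset */);
}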

best
wanghe

Benoit Ganne (bganne) via lists.fd.io wrote on Mon, Aug 1, 2022 at 15:08:

> You probably want to use vlib_buffer_clone() instead.
>
> Best
> ben
>
> > -Original Message-
> > From: vpp-dev@lists.fd.io  On Behalf Of NUAA??
> > Sent: Monday, August 1, 2022 4:53
> > To: vpp-dev@lists.fd.io
> > Subject: [vpp-dev] Copy packet forwarding performance issues
> >
> > Hi,vpp experts
> > I have a task that send one packet to three NIC
> > my method is that use "vlib_buffer_alloc" function to copy three buffer,
> > but this method is poor performance,about that if vpp is 10G bps, once
> > copy will reduce 1.5G bps
> >
> > is vpp has like dpdk buffer use refcnt count to avoid copy buffer? can
> > you give me some suggestions to send packet with high performence?
> >
> > Best wish
> > wanghe
> >
> >
>
>
> 
>
>




[vpp-dev] Copy packet forwarding performance issues

2022-07-31 Thread NUAA
Hi, vpp experts
I have a task that requires sending one packet out of three NICs.
My current method is to use vlib_buffer_alloc() to allocate three buffers
and copy the packet into them, but the performance is poor: with VPP running
at 10 Gbps, each copy costs roughly 1.5 Gbps.

Does VPP have something like DPDK's buffer refcnt to avoid copying the
buffer? Can you give me some suggestions for sending the packet with high
performance?

Best wish
wanghe




Re: [vpp-dev] api msg deadlock

2022-06-06 Thread NUAA
Hi, Florin

If I enqueue a burst of messages every once in a while, can svm_queue_add()
block forever because q->cursize == q->maxsize?
Maybe that is what causes the deadlock.

int
svm_queue_add (svm_queue_t * q, u8 * elem, int nowait)
{
  i8 *tailp;
  int need_broadcast = 0;

  if (nowait)
{
  /* zero on success */
  if (svm_queue_trylock (q))
{
 return (-1);
}
}
  else
svm_queue_lock (q);

  if (PREDICT_FALSE (q->cursize == q->maxsize))
{
  if (nowait)
{
 svm_queue_unlock (q);
 return (-2);
}
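  /* blocking path: this loop waits until a consumer dequeues an element and
     signals the queue; if nothing ever drains a full queue, the call never
     returns */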
  while (q->cursize == q->maxsize)
svm_queue_wait_inline (q);
}

  tailp = (i8 *) (&q->data[0] + q->elsize * q->tail);
  clib_memcpy_fast (tailp, elem, q->elsize);

  q->tail++;
  q->cursize++;

  need_broadcast = (q->cursize == 1);

  if (q->tail == q->maxsize)
q->tail = 0;

  if (need_broadcast)
svm_queue_send_signal_inline (q, 1);

  svm_queue_unlock (q);

  return 0;
}

NUAA无痕 via lists.fd.io wrote on Mon, Jun 6, 2022 at 14:52:

> Hi, florin
>
> i have study jvpp code and found that maybe jvpp is ok, error occurs in
> vpp svm_queue
>
> jvpp connect vpp by vl_client_connect_to_vlib function
> (vpp/src/vlibmemory/memory_client.c)
>
> int
> vl_client_connect_to_vlib (const char *svm_name,
>   const char *client_name, int rx_queue_size)
> {
>   return connect_to_vlib_internal (svm_name, client_name, rx_queue_size,
>   rx_thread_fn, 0 /* thread fn arg */ ,
>   1 /* do map */ );
> }
>
> jvpp example is
>  jvpp-registry/jvpp_registry.c:if
> (vl_client_connect_to_vlib(shm_prefix, name, 32) < 0)
>
> i change 32 to 4096, now vpp has run three days and not deadlock
>
> so i think that whether rx_queue_size parameter is too small, resulting in
> function error handling
>
> I also compare vpp-2101 with vpp-2206 and found that binary api code(
> svm_queue in queue.c ) is no change
>
> If i send a large message that size over rx_queue_size will cause problem
> ?
> Looking forward to your opinion
>
> Best regards
> wanghe
>
Florin Coras wrote on Fri, Jun 3, 2022 at 23:28:
>
>> Hi Wanghe,
>>
>> The only api bindings supported today are c, python and golang. Maybe
>> somebody familiar with the jvpp code can help you out but otherwise I’d
>> recommend to switch if possible.
>>
>> Regards,
>> Florin
>>
>> > On Jun 3, 2022, at 7:55 AM, NUAA无痕  wrote:
>> >
>> > Hi, florin
>> >
>> > About this question, i compare c++ code with jvpp code, then i found
>> that jvpp maybe have a bug and even if update vpp also cannot resolve it
>> >
>> > jvpp code according to vpp version 1901, that has jvpp example
>> >
>> vpp-1901/extras/japi/java/jvpp-core/io/fd/vpp/jvpp/core/examples/CreateSubInterfaceExample.java
>> has jvpp example and our code according to it
>> >
>> > now vpp version is 2101
>> > then when java code connected to vpp and then use "close“ function it
>> will hint "peer unresponsive, give up"
>> > this error from src/vlibmemory/memory_client.c vl_client_disconnect
>> function
>> >
>> > why this error is that svm_queue_sub always return -2 until timeout
>> >
>> > this is code , the reason is that "vl_input_queue->cursize == 0 " and
>> vl_input_queue->head == vl_input_queue->tail
>> >
>> > int
>> > vl_client_disconnect (void)
>> > {
>> >   vl_api_memclnt_delete_reply_t *rp;
>> >   svm_queue_t *vl_input_queue;
>> >   api_main_t *am = vlibapi_get_main ();
>> >   time_t begin;
>> >
>> >   vl_input_queue = am->vl_input_queue;
>> >   vl_client_send_disconnect (0 /* wait for reply */ );
>> >
>> >   /*
>> >* Have to be careful here, in case the client is disconnecting
>> >* because e.g. the vlib process died, or is unresponsive.
>> >*/
>> >   begin = time (0);
>> >   while (1)
>> > {
>> >   time_t now;
>> >
>> >   now = time (0);
>> >
>> >   if (now >= (begin + 2))
>> > {
>> >  clib_warning ("peer unresponsive, give up");
>> >  am->my_client_index = ~0;
>> >  am->my_registration = 0;
>> >  am->shmem_hdr = 0;
>> >  return -1;
>> > }
>> >
>> > /* this error because vl_input_queue->cursize == 0  */
>> >   if (svm_queue_sub (vl_input_queue, (u8 *) & rp, SVM_Q_NOWAIT, 0)
>> < 0)
>> > continue;
>> >
>> >   VL_MSG_API_UNPOISON (rp);
>> >
>> >   /* drain the queue */
>> >   if (ntohs (rp->_vl_m

Re: [vpp-dev] api msg deadlock

2022-06-06 Thread NUAA
Hi, Florin

I have studied the jvpp code and now think jvpp itself may be fine; the
error seems to occur in VPP's svm_queue.

jvpp connects to VPP through the vl_client_connect_to_vlib() function
(vpp/src/vlibmemory/memory_client.c):

int
vl_client_connect_to_vlib (const char *svm_name,
  const char *client_name, int rx_queue_size)
{
  return connect_to_vlib_internal (svm_name, client_name, rx_queue_size,
  rx_thread_fn, 0 /* thread fn arg */ ,
  1 /* do map */ );
}

The jvpp call site is:
jvpp-registry/jvpp_registry.c:
if (vl_client_connect_to_vlib (shm_prefix, name, 32) < 0)

I changed 32 to 4096, and VPP has now run for three days without
deadlocking.

So I suspect the rx_queue_size parameter was too small, which led the
function into its error-handling path.
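Concretely, the change I tested is just that third argument (a sketch of the
modified call site; everything around it is unchanged):

/* jvpp-registry/jvpp_registry.c: enlarge the client RX queue from the
   original 32 entries to 4096, the value used in the three-day test */
if (vl_client_connect_to_vlib (shm_prefix, name, 4096) < 0)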

I also compared vpp-2101 with vpp-2206 and found that the binary API code
(svm_queue in queue.c) has not changed.

If I send a large message that exceeds what rx_queue_size allows, could that
cause the problem?
Looking forward to your opinion.

Best regards
wanghe

Florin Coras wrote on Fri, Jun 3, 2022 at 23:28:

> Hi Wanghe,
>
> The only api bindings supported today are c, python and golang. Maybe
> somebody familiar with the jvpp code can help you out but otherwise I’d
> recommend to switch if possible.
>
> Regards,
> Florin
>
> > On Jun 3, 2022, at 7:55 AM, NUAA无痕  wrote:
> >
> > Hi, florin
> >
> > About this question, i compare c++ code with jvpp code, then i found
> that jvpp maybe have a bug and even if update vpp also cannot resolve it
> >
> > jvpp code according to vpp version 1901, that has jvpp example
> >
> vpp-1901/extras/japi/java/jvpp-core/io/fd/vpp/jvpp/core/examples/CreateSubInterfaceExample.java
> has jvpp example and our code according to it
> >
> > now vpp version is 2101
> > then when java code connected to vpp and then use "close“ function it
> will hint "peer unresponsive, give up"
> > this error from src/vlibmemory/memory_client.c vl_client_disconnect
> function
> >
> > why this error is that svm_queue_sub always return -2 until timeout
> >
> > this is code , the reason is that "vl_input_queue->cursize == 0 " and
> vl_input_queue->head == vl_input_queue->tail
> >
> > int
> > vl_client_disconnect (void)
> > {
> >   vl_api_memclnt_delete_reply_t *rp;
> >   svm_queue_t *vl_input_queue;
> >   api_main_t *am = vlibapi_get_main ();
> >   time_t begin;
> >
> >   vl_input_queue = am->vl_input_queue;
> >   vl_client_send_disconnect (0 /* wait for reply */ );
> >
> >   /*
> >* Have to be careful here, in case the client is disconnecting
> >* because e.g. the vlib process died, or is unresponsive.
> >*/
> >   begin = time (0);
> >   while (1)
> > {
> >   time_t now;
> >
> >   now = time (0);
> >
> >   if (now >= (begin + 2))
> > {
> >  clib_warning ("peer unresponsive, give up");
> >  am->my_client_index = ~0;
> >  am->my_registration = 0;
> >  am->shmem_hdr = 0;
> >  return -1;
> > }
> >
> > /* this error because vl_input_queue->cursize == 0  */
> >   if (svm_queue_sub (vl_input_queue, (u8 *) & rp, SVM_Q_NOWAIT, 0) <
> 0)
> > continue;
> >
> >   VL_MSG_API_UNPOISON (rp);
> >
> >   /* drain the queue */
> >   if (ntohs (rp->_vl_msg_id) != VL_API_MEMCLNT_DELETE_REPLY)
> > {
> >  clib_warning ("queue drain: %d", ntohs (rp->_vl_msg_id));
> >  vl_msg_api_handler ((void *) rp);
> >  continue;
> > }
> >   vl_msg_api_handler ((void *) rp);
> >   break;
> > }
> >
> >   vl_api_name_and_crc_free ();
> >   return 0;
> > }
> >
> > when i use c++ for vpp binary api,  vl_input_queue->cursize == 1 and
> vl_input_queue->head != vl_input_queue->tail
> >
> > so c++ use binary api is correct that about svm_queue_* series functions
> >
> > Although JVpp is no longer supported, but this is important for me!
> >
> > Can you give a patch for jvpp? Thanks
> >
> > Best regards
> >
> > Wanghe
> >
> >
>
>




Re: [vpp-dev] api msg deadlock

2022-06-03 Thread NUAA
Hi, Florin

Regarding this question: I compared our C++ code with the jvpp code and
concluded that jvpp may have a bug, and that updating VPP alone cannot fix
it.

The jvpp code targets VPP version 1901, which ships a jvpp example:
vpp-1901/extras/japi/java/jvpp-core/io/fd/vpp/jvpp/core/examples/CreateSubInterfaceExample.java
Our code follows that example.

The VPP version we run now is 2101.
When the Java code has connected to VPP and then calls "close", it prints
"peer unresponsive, give up".
This error comes from vl_client_disconnect() in
src/vlibmemory/memory_client.c.

The reason for the error is that svm_queue_sub() keeps returning -2 until
the timeout expires.

Here is the code; the underlying condition is that vl_input_queue->cursize == 0
and vl_input_queue->head == vl_input_queue->tail:

int
vl_client_disconnect (void)
{
  vl_api_memclnt_delete_reply_t *rp;
  svm_queue_t *vl_input_queue;
  api_main_t *am = vlibapi_get_main ();
  time_t begin;

  vl_input_queue = am->vl_input_queue;
  vl_client_send_disconnect (0 /* wait for reply */ );

  /*
   * Have to be careful here, in case the client is disconnecting
   * because e.g. the vlib process died, or is unresponsive.
   */
  begin = time (0);
  while (1)
{
  time_t now;

  now = time (0);

  if (now >= (begin + 2))
{
 clib_warning ("peer unresponsive, give up");
 am->my_client_index = ~0;
 am->my_registration = 0;
 am->shmem_hdr = 0;
 return -1;
}

/* this error because vl_input_queue->cursize == 0  */
  if (svm_queue_sub (vl_input_queue, (u8 *) & rp, SVM_Q_NOWAIT, 0) < 0)
continue;

  VL_MSG_API_UNPOISON (rp);

  /* drain the queue */
  if (ntohs (rp->_vl_msg_id) != VL_API_MEMCLNT_DELETE_REPLY)
{
 clib_warning ("queue drain: %d", ntohs (rp->_vl_msg_id));
 vl_msg_api_handler ((void *) rp);
 continue;
}
  vl_msg_api_handler ((void *) rp);
  break;
}

  vl_api_name_and_crc_free ();
  return 0;
}

When I use C++ with the VPP binary API, vl_input_queue->cursize == 1 and
vl_input_queue->head != vl_input_queue->tail,

so the C++ binary API client behaves correctly with respect to the
svm_queue_* family of functions.

Although jvpp is no longer supported, this is important for me!

Can you provide a patch for jvpp? Thanks.

Best regards

Wanghe




Re: [vpp-dev] api msg deadlock

2022-05-29 Thread NUAA
Hi Florin,
Thanks for your suggestion; I will update VPP to a newer version to address
this question.

In addition, does VPP have an example of using the binary API from C++ or
Java, so that I can compare it with our project code?
Our C++ code follows https://pantheon.tech/vpp-101-plugins-binary-api/
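In case it helps the comparison, the C skeleton we are effectively
reproducing is just the connect/disconnect pair; a minimal sketch, assuming
the default shared-memory segment name "/vpe-api", an arbitrary client name
and queue size, and that <vlibmemory/api.h> is the right header to pull in:

#include <vlibmemory/api.h>

static int
connect_and_disconnect (void)
{
  /* attach to VPP's shared-memory binary API segment */
  if (vl_client_connect_to_vlib ("/vpe-api", "example-client", 32) < 0)
    return -1;

  /* ... build request messages, send them, handle the replies ... */

  /* detach cleanly; this ends up calling the vl_client_disconnect() shown
     elsewhere in this thread */
  vl_client_disconnect_from_vlib ();
  return 0;
}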

Thank you very much.

Best regards
wanghe

Florin Coras wrote on Sun, May 29, 2022 at 03:37:

> Hi wanghe,
>
> Neither vpp 21.01 nor jvpp are supported, so your only options are to try
> back porting fixes from newer versions, if any exist, or to debug the
> problem.
>
> As previously mentioned, the deadlock seem to be in a reply message, so
> the issue is probably in the java/c++ implementation of the binary api
> client or the way the api is used by the client. Either the api client is
> not dequeueing messages, e.g., maybe it’s stuck waiting on vpp, or, if did
> dequeue, it did not broadcast on the condvar or the broadcast was missed by
> vpp.
>
> Try to check what your api client is doing. That might shed some light on
> the issue.
>
> Hope this helps.
>
> Regards,
> Florin
>
> On May 27, 2022, at 7:57 PM, NUAA无痕  wrote:
>
> Hi,florin
>
> I would appreciate it if you can resolve it
>
> My project use java web to control vpp, many binary api are used, such as
> interfaces, ip, our customize api etc.
>
> half a year ago, i use one c++ restful framework ( google‘s pistache) to
> use vpp binary api, there also has deadlock problem , then i don't know to
> send mail for help (dont know must subscribe mail  can send mail to vpp-dev
> haha)
>
> i think maybe i m not proficient with c++ multithreading that cause
> deadlock, that time i fount if use one thread also api communication will
> deadlock
>
> so i think many times use api will cause vpp deadlock
>
> For resolve this problem, we decide to change it to jvpp, maybe jvpp
> resolve this deadlock (now this problem also exist)
>
> i found git log for vpp new version that vpp resolve a deadlock about svm
> (i dont know if it can resolve this), but now we update vpp need at least
> one month (maybe too long)
>
> So florin expert , can you analyze this problem?  thank you very much!
>
> Best regards
> wanghe
>
>
>
>
> Florin Coras wrote on Sat, May 28, 2022 at 01:02:
>
>> Hi wanghe,
>>
>> Unfortunately, jvpp is no longer supported so probably there’s no recent
>> fix for the issue you’re hitting. By the looks of it, an api msg handler is
>> trying to enqueue something (probably a reply towards the client) and ends
>> up stuck because the svm queue is full and a condvar broadcast never comes.
>>
>> If you really need to fix this, I’d check jvpp code to see if condvar
>> broadcasts on dequeue are done properly.
>>
>> Regards,
>> Florin
>>
>> > On May 27, 2022, at 12:53 AM, NUAA无痕  wrote:
>> >
>> > Hi, vpp experts
>> >
>> > im use vpp 2101 version
>> > my project use jvpp communicate with vpp by binary api, but now when
>> long time run(about 14h) it will deadlock, this is info
>> >
>> > 0x7f2687783a35 in pthread_cond_wait@@GLIBC_2.3.2 () from
>> /usr/lib64/libpthread.so.0
>> > (gdb) bt
>> > #0  0x7f2687783a35 in pthread_cond_wait@@GLIBC_2.3.2 () from
>> /usr/lib64/libpthread.so.0
>> > #1  0x7f2688f198e6 in svm_queue_add () from
>> /usr/local/zfp/lib/libsvm.so.21.01.1
>> > #2  0x55cdda6856f3 in ?? ()
>> > #3  0x7f268904c909 in vl_msg_api_handler_with_vm_node () from
>> /usr/local/zfp/lib/libvlibmemory.so.21.01.1
>> > #4  0x7f2689033521 in vl_mem_api_handle_msg_main () from
>> /usr/local/zfp/lib/libvlibmemory.so.21.01.1
>> > #5  0x7f2689043fce in ?? () from
>> /usr/local/zfp/lib/libvlibmemory.so.21.01.1
>> > #6  0x7f2688fba5a7 in ?? () from
>> /usr/local/zfp/lib/libvlib.so.21.01.1
>> > #7  0x7f2688ef8de0 in clib_calljmp () from
>> /usr/local/zfp/lib/libvppinfra.so.21.01.1
>> > #8  0x7f263d673dd0 in ?? ()
>> > #9  0x7f2688fbdf67 in ?? () from
>> /usr/local/zfp/lib/libvlib.so.21.01.1
>> > #10 0x in ?? ()
>> >
>> > because i use release version so some info is not show, i found that
>> vpp new version has change a lot about svm.
>> >
>> > for some reason,i need some time to update vpp and now must resolve
>> this problem
>> >
>> > so experts can you give patch for this bug for vpp 2101 version
>> >
>> > Best regards
>> > wanghe
>> >
>> > 
>> >
>>
>>
>




Re: [vpp-dev] api msg deadlock

2022-05-27 Thread NUAA
Hi, Florin

I would appreciate it if you could help resolve this.

My project uses a Java web application to control VPP, and many binary APIs
are used: interfaces, ip, our own custom APIs, and so on.

Half a year ago I used a C++ RESTful framework (Pistache) to drive the VPP
binary API, and there was a deadlock problem there as well. At the time I
did not know I could write to this list for help (I did not know you must
subscribe before you can post to vpp-dev, haha).

I thought the deadlock might be caused by my lack of C++ multithreading
experience, but I found that even with a single thread the API communication
would deadlock,

so I believe that heavy use of the API eventually deadlocks VPP.

To get around the problem we switched to jvpp, hoping it would avoid the
deadlock, but the problem still exists.

I found in the git log that a newer VPP version fixes a deadlock related to
svm (I don't know whether it covers this case), but updating VPP would take
us at least a month, which may be too long.

So, Florin, could you analyze this problem? Thank you very much!

Best regards
wanghe




Florin Coras wrote on Sat, May 28, 2022 at 01:02:

> Hi wanghe,
>
> Unfortunately, jvpp is no longer supported so probably there’s no recent
> fix for the issue you’re hitting. By the looks of it, an api msg handler is
> trying to enqueue something (probably a reply towards the client) and ends
> up stuck because the svm queue is full and a condvar broadcast never comes.
>
> If you really need to fix this, I’d check jvpp code to see if condvar
> broadcasts on dequeue are done properly.
>
> Regards,
> Florin
>
> > On May 27, 2022, at 12:53 AM, NUAA无痕  wrote:
> >
> > Hi, vpp experts
> >
> > im use vpp 2101 version
> > my project use jvpp communicate with vpp by binary api, but now when
> long time run(about 14h) it will deadlock, this is info
> >
> > 0x7f2687783a35 in pthread_cond_wait@@GLIBC_2.3.2 () from
> /usr/lib64/libpthread.so.0
> > (gdb) bt
> > #0  0x7f2687783a35 in pthread_cond_wait@@GLIBC_2.3.2 () from
> /usr/lib64/libpthread.so.0
> > #1  0x7f2688f198e6 in svm_queue_add () from
> /usr/local/zfp/lib/libsvm.so.21.01.1
> > #2  0x55cdda6856f3 in ?? ()
> > #3  0x7f268904c909 in vl_msg_api_handler_with_vm_node () from
> /usr/local/zfp/lib/libvlibmemory.so.21.01.1
> > #4  0x7f2689033521 in vl_mem_api_handle_msg_main () from
> /usr/local/zfp/lib/libvlibmemory.so.21.01.1
> > #5  0x7f2689043fce in ?? () from
> /usr/local/zfp/lib/libvlibmemory.so.21.01.1
> > #6  0x7f2688fba5a7 in ?? () from
> /usr/local/zfp/lib/libvlib.so.21.01.1
> > #7  0x7f2688ef8de0 in clib_calljmp () from
> /usr/local/zfp/lib/libvppinfra.so.21.01.1
> > #8  0x7f263d673dd0 in ?? ()
> > #9  0x7f2688fbdf67 in ?? () from
> /usr/local/zfp/lib/libvlib.so.21.01.1
> > #10 0x in ?? ()
> >
> > because i use release version so some info is not show, i found that vpp
> new version has change a lot about svm.
> >
> > for some reason,i need some time to update vpp and now must resolve this
> problem
> >
> > so experts can you give patch for this bug for vpp 2101 version
> >
> > Best regards
> > wanghe
> >
> > 
> >
>
>




Re: [vpp-dev] hoststack-udp problems

2022-05-25 Thread NUAA
Hi, Florin Coras
I may not have described this clearly.

I am using VPP version 2101.

1. sendto() problem
I run a C UDP socket program through LDP,
and I found that using only sendto() fails. This is the client code:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main ()
{
    int sockfd = socket (AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    struct sockaddr_in server = {0};
    server.sin_addr.s_addr = inet_addr ("192.168.1.1");
    server.sin_port = htons (12345);   /* placeholder; the real port was elided in the original mail */
    server.sin_family = AF_INET;

    char msg[] = "hello";
    /* this is the part that differs */
    sendto (sockfd, msg, sizeof (msg), 0, (struct sockaddr *) &server,
            sizeof (server));

    close (sockfd);
    return 0;
}

With this code, VPP reports an error that the remote IP cannot be connected.

But if I call connect() first, like this:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main ()
{
    int sockfd = socket (AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    struct sockaddr_in server = {0};
    server.sin_addr.s_addr = inet_addr ("192.168.1.1");
    server.sin_port = htons (12345);   /* placeholder; the real port was elided in the original mail */
    server.sin_family = AF_INET;

    char msg[] = "hello";
    /* this is the part that differs */
    connect (sockfd, (struct sockaddr *) &server, sizeof (server));
    sendto (sockfd, msg, sizeof (msg), 0, (struct sockaddr *) NULL,
            sizeof (server));

    close (sockfd);
    return 0;
}

then LDP works fine.

2. I found that VPP's UDP path under LDP also uses an MSS; the code sets
mss = 1500 - 20 - 8.
In my view, a UDP packet larger than the MTU should be IP-fragmented rather
than chopped to an MSS the way TCP does
(that would be a good design).

Right now I need to send a 4 KB UDP datagram using IP fragmentation, not MSS
chopping, because if the datagram is chopped into MSS-sized pieces I have to
reassemble them myself to get my data, whereas with IP fragmentation and
reassembly I get the data back easily.

3."it cannot stop" is that client send "hello" once, if i use command "show
int", the packet will increase 1 and stop,
but i set startup.conf udp mtu biger than 1500, i still send "hello" and
cannot stop, the count always increase and server also can receive it
if mtu less than 1500 ,it is ok
by the way, my nic that vpp manage  mtu is 9000, startup.conf udp { mtu
9000 } will cause this question

best

Florin Coras wrote on Sun, May 22, 2022 at 06:57:

> Hi,
>
>
>
> > On May 20, 2022, at 2:31 AM, NUAA无痕  wrote:
> >
> > hi,vpp expert
> > now im use vpp hoststack for udp, i meet some problems
> >
> > 1.udp socket must use connect function, if user sendto will cause ip
> address not connect error
>
> What version of vpp are you using? Although we prefer connected udp for
> performance reasons, sendto should work. If socket was not connected/bound,
> vcl should connect it. What’s the exact error you’re getting and what are
> you trying to do?
>
> >
> > 2.if i use udp socket send packet biger than 1500, udp will split many
> packet, is some method let it dont split
>
> What exactly are you trying to achieve? Session layer chops datagrams into
> mss sized packets. If you’re trying to send large datagrams, up to nic mtu
> size, then as you did lower, increase udp mtu.
>
> >
> > 3.startup.conf set udp { mtu 9000 }, then use hoststack send one packet,
> it will always send packet and cannot stop, the mtu must less than 1500
>
> Not sure I understand what you mean by “it cannot stop”? If by chance
> you’re trying to force ip fragmentation, that’s not supported with udp
> sessions.
>
> Regards,
> Florin
>
> >
> > can u give some suggestions? than u
> >
> > 
> >
>
>




[vpp-dev] hoststack-java netty segmentfault

2022-05-25 Thread NUAA
-- Forwarded message --
From: NUAA无痕 
Date: Wed, May 25, 2022 at 16:37
Subject: Re: [vpp-dev] hoststack-java netty segmentfault
To: Florin Coras 


Hi, Florin
Yes, I am using LDP + Java. I have tested plain Java sockets and they work
fine!
I think the JVM also goes through libc.so, so I assume a Java socket
ultimately translates into C socket calls, and that seems to be the case.

Now I am running Netty over LDP, but it hits a segmentation fault.
The VPP version is 2101.
this is info

Thread 17 "nioEventLoopGro" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7f977571d0 (LWP 7042)]
0x007fb7d998a8 in svm_fifo_del_want_deq_ntf (f=0x0, ntf_type=2 '\002')
at /home/wanghe/ZFP-2101/src/svm/svm_fifo.h:770
770   f->want_deq_ntf &= ~ntf_type;
(gdb) bt
#0  0x007fb7d998a8 in svm_fifo_del_want_deq_ntf (f=0x0, ntf_type=2
'\002') at /home/wanghe/ZFP-2101/src/svm/svm_fifo.h:770
#1  0x007fb7da6ba0 in vppcom_epoll_ctl (vep_handle=1, op=3,
session_handle=10, event=0x7f97755c68) at
/home/wanghe/ZFP-2101/src/vcl/vppcom.c:2740
#2  0x007fb7dc02f0 in vls_epoll_ctl (ep_vlsh=0, op=3, vlsh=9,
event=0x7f97755c68) at /home/wanghe/ZFP-2101/src/vcl/vcl_locked.c:1293
#3  0x007fb7fbc390 in epoll_ctl (epfd=128, op=3, fd=137,
event=0x7f97755c68) at /home/wanghe/ZFP-2101/src/vcl/ldp.c:2294
#4  0x007fa8135b1c in Java_sun_nio_ch_EPollArrayWrapper_epollCtl ()
from /usr/lib/jvm/java-8-openjdk-arm64/jre/lib/aarch64/libnio.so
#5  0x007f9c08f49c in ?? ()
Backtrace stopped: previous frame identical to this frame (corrupt stack?)


This may be a bug. If VPP could support Java it would be very useful;
I need it to improve the performance of a Java web application.

Best wishes

Florin Coras wrote on Sun, May 22, 2022 at 06:43:

> Hi,
>
> Are you trying to use LDP + java? I suspect that has never been tested and
> I’d be surprised if it worked.
>
> Regards,
> Florin
>
> > On May 20, 2022, at 2:18 AM, NUAA无痕  wrote:
> >
> > hi, vpp expert
> > im use vpp hoststack for java netty
> > but it segmentfault, reason is epoll use svm_fifo_t is null
> > can u give some suggestion,thank u
> >
> >
> > 
> >
>
>




[vpp-dev] hoststack-udp problems

2022-05-20 Thread NUAA
Hi, vpp experts
I am now using the VPP host stack for UDP and have run into some problems.

1. A UDP socket must call connect(); if the program only uses sendto(), it
gets an "ip address not connected" style error.

2. If I use a UDP socket to send a packet bigger than 1500 bytes, UDP splits
it into many packets. Is there a way to keep it from splitting?

3. With udp { mtu 9000 } set in startup.conf, sending a single packet
through the host stack makes it transmit packets endlessly and never stop;
the mtu has to be less than 1500.

Can you give some suggestions? Thank you.




[vpp-dev] hoststack-java netty segmentfault

2022-05-20 Thread NUAA
Hi, vpp experts
I am using the VPP host stack with Java Netty,
but it hits a segmentation fault; the reason is that epoll is handed a NULL
svm_fifo_t.
Can you give some suggestions? Thank you.
