these logs which I added in the kernel code were printed, and I do not see
any other kernel logs.
Thanks,
Madhuker.
On Wed, Nov 15, 2023 at 8:49 PM Stephen Hemminger <
step...@networkplumber.org> wrote:
> On Wed, 15 Nov 2023 15:38:55 +0530
> madhukar mythri wrote:
>
> > Hi all,
Hi all,
On RHEL 9.2 with DPDK 22.11.1, our DPDK primary application failed
to add an RSS flow on the TAP sub-device when loading the TAP BPF byte-code
instructions.
The "struct bpf_insn l3_l4_hash_insns[]" array (from the file
drivers/net/tap/tap_bpf_insns.h) is in eBPF bytecode instructions
Regards,
Madhukar.
On Thu, May 18, 2023 at 5:34 PM Nishant Verma wrote:
> Hi Madhukar,
>
> Can you please elaborate what issue you found in TX-side? Any solution for
> that?
> For me it seems to be both rx and tx.
>
>
> Thanks.
>
> Regards,
> Nishant Verma
>
>
Hi,
We are facing an issue on the transmit side, appearing randomly after 8 to 10
days of network traffic flow on the Intel X710 10G NIC with the i40e PMD.
We found the issue is in the transmit-side Tx queue: Tx packets were not
going out, and we also observed that the Tx-queue stats were not incrementing,
even
Hi,
In our DPDK primary application (based on the MLX5 PMD/device), we are facing
an issue receiving IPv6 "Neighbor-Solicitation" packets, which are sent to a
multicast address.
Our DPDK primary application is configured in flow-isolation mode, by
adding the unicast, multicast and broadcast MAC address
Hi,
In the Hyper-V (Azure) environment, with or without accelerated mode, we are
seeing segmentation-fault issues while running the secondary process of
DPDK with the legacy argument syntax on DPDK 21.11.
The same arguments worked well up to DPDK 21.08, only with the
Hi,
Make sure the kernel drivers (mlx5) are loaded properly on the Mellanox
devices.
In DPDK 19.11 it works well; try with the PCI domain and the '-n' option as
follows:
./bin/testpmd -l 10-12 -n 1 -w :82:00.0 --
Regards,
Madhukar.
On Thu, Jan 27, 2022 at 1:46 PM Sindhura Bandi <
.org> wrote:
>
>> On Thu, 27 May 2021 15:40:57 +
>> Raslan Darawsheh wrote:
>>
>> > Hi,
>> >
> > > > -----Original Message-----
>> > > From: users On Behalf Of madhukar mythri
>> > > Sent: Thursday, May 27, 2021
Hi,
We are facing an issue with UDP/IP fragmented packets on the Azure cloud
platform with Accelerated-Networking-enabled ports.
Fragmented UDP Rx packets are received fine on media ports. But when a packet
arrives in two fragments, the first fragment is received on Queue-0
and the second fragment is
Hi,
When we tried to launch a DPDK app on an Azure VM with 2 queues,
we saw the following errors, and thus we are not able to receive any traffic
on these NIC ports (MLX5).
On the Azure VM we are using the "net_failsafe" PMD.
===
PORT 0 Max supports 16 rx queues and 16 tx queues (driver_name =
r does not
> support msix. We have plans to upstream vmxnet3 msix support in the future.
>
> Yong
>
> -----Original Message-----
> From: Thomas Monjalon
> Date: Thursday, January 14, 2021 at 10:50 AM
> To: madhukar mythri
> Cc: "users@dpdk.org" , Yong Wang
>
Hi,
Does the vmxnet3 PMD support LSC=1 (i.e., interrupt mode) for link changes?
When I enable LSC=1 the functionality works fine, but when pumping traffic
I see increased CPU load on the cores running the
"eal-intr-thread" epoll_wait() function for more CPU time.
Actually,
It worked well with "igb_uio" on both X710 and 82599 NICs with SR-IOV
enabled. I had tested with DPDK 18.11.
Have you enabled "intel_iommu=on" in the host kernel boot parameters?
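A quick way to verify this is to look for the flag in the running kernel's command line. A hedged sketch, using a sample command-line string so it is self-contained; on a real host you would read /proc/cmdline instead:

```shell
# On a real host:  cmdline=$(cat /proc/cmdline)
# Sample value shown here for illustration only:
cmdline="BOOT_IMAGE=/vmlinuz-4.18.0 root=/dev/mapper/rhel-root intel_iommu=on iommu=pt"

# Check whether the IOMMU was enabled at boot.
case "$cmdline" in
  *intel_iommu=on*) echo "intel_iommu: enabled" ;;
  *)                echo "intel_iommu: NOT enabled (add intel_iommu=on iommu=pt to the kernel boot line and reboot)" ;;
esac
```

If the flag is missing, it must be added to the bootloader configuration (e.g. via GRUB) and the host rebooted before VFIO passthrough of the VFs will work.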
On Sat, Feb 1, 2020 at 10:50 PM Suchetha p wrote:
> Hi,
>
> We are trying to bring up KVM based VMs on HP
st, in addition to the vlan
> id and mac, does it need to set the IP of source NIC and dst NIC?
>
> Thanks,
> Derek
>
> madhukar mythri wrote on Wednesday, September 18, 2019 at 5:00 PM:
>
>> Did you configure testpmd with "io" forwarding mode?
>> you can cross-check with this
Hi,
When I ran 'testpmd' using the latest DPDK 19.08 (or even
DPDK 18.11.2), we see only a single Rx queue available on the bnxt VF
device.
Is any workaround/fix available to increase it to two queues?
The bnxt device firmware version is: 1.8.4.
Here is the output of the 'testpmd':
Hi,
When I run the sample program "hello_world" using DPDK 18.11
on the Broadcom bnxt driver, the following crash is seen:
[root@localhost build]# ./helloworld
EAL: Detected 48 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Probing VFIO support...
You can go through this OVS+DPDK link for performance-tuning details:
http://docs.openvswitch.org/en/latest/intro/install/dpdk/
In it, go to the "Performance Tuning" section, which explains
CPU isolation (make sure you have done core-to-Rx-queue binding) and
memory allocated as per
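For reference, a minimal sketch of those tuning knobs as OVS commands. The core masks, memory sizes, and the interface name "dpdk0" below are placeholder values for illustration, not recommendations from the guide:

```shell
# Pin the OVS DPDK lcore and the PMD (polling) threads to isolated
# cores (placeholder masks: lcore on core 1, PMDs on cores 2-3).
ovs-vsctl set Open_vSwitch . other_config:dpdk-lcore-mask=0x2
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xC

# Reserve hugepage memory per NUMA socket (placeholder sizes, in MB).
ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="1024,0"

# Bind Rx queues of a DPDK port to specific PMD cores
# (queue 0 -> core 2, queue 1 -> core 3; "dpdk0" is a placeholder name).
ovs-vsctl set Interface dpdk0 other_config:pmd-rxq-affinity="0:2,1:3"
```

These pair with isolating the same cores from the kernel scheduler (e.g. isolcpus on the kernel boot line), so the PMD threads are never preempted.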
> on its own with no required intervention from the hypervisor.
>
>
> ------
> *From:* users on behalf of madhukar mythri <
> madhukar.myt...@gmail.com>
> *Sent:* Monday, April 1, 2019 7:21:11 AM
> *To:* users@dpdk.org
> *Subject:* [dpdk-users] Support for RSS UDP hash types on vmxnet3
>
> Hi,
>
Hi,
As per the "Poll Mode Driver for Paravirtual VMXNET3 NIC" documentation in
the NIC driver guides, the features and limitations of the VMXNET3 PMD are
as follows:
---
RSS based load balancing between queues - SUPPORTED
But I'm seeing all the UDP