Revise VMDq and VMDq/DCB Forwarding sample documentation for clarity,
accuracy, and compliance with technical writing standards.

Common changes to both files:
- Add technology overview sections explaining VMDq hardware packet sorting
- Fix contradictory statements about command-line options
- Create dedicated Command-Line Options sections
- Add Supported Configurations sections for hardware details
- Improve sentence structure and readability
- Fix RST formatting issues
- Convert warnings to RST note directives

vmdq_forwarding.rst:
- Update application name from vmdq_app to dpdk-vmdq

vmdq_dcb_forwarding.rst:
- Add DCB/QoS explanation using VLAN user priority fields
- Correct typo: "VMD queues" -> "VMDq queues"
- Correct capitalization: "linux" -> "Linux"
- Add sub-headings for traffic class and MAC address sections

The technology context is based on Intel's VMDq Technology paper and
helps readers understand hardware-based packet classification benefits
in virtualized environments.

Signed-off-by: Stephen Hemminger <[email protected]>
---
 .../sample_app_ug/vmdq_dcb_forwarding.rst     | 193 +++++++++++-------
 doc/guides/sample_app_ug/vmdq_forwarding.rst  | 144 +++++++------
 2 files changed, 207 insertions(+), 130 deletions(-)

diff --git a/doc/guides/sample_app_ug/vmdq_dcb_forwarding.rst b/doc/guides/sample_app_ug/vmdq_dcb_forwarding.rst
index efb133c11c..9d01901f0c 100644
--- a/doc/guides/sample_app_ug/vmdq_dcb_forwarding.rst
+++ b/doc/guides/sample_app_ug/vmdq_dcb_forwarding.rst
@@ -1,150 +1,197 @@
 ..  SPDX-License-Identifier: BSD-3-Clause
     Copyright(c) 2010-2014 Intel Corporation.
 
-VMDQ and DCB Forwarding Sample Application
+VMDq and DCB Forwarding Sample Application
 ==========================================
 
-The VMDQ and DCB Forwarding sample application is a simple example of packet processing using the DPDK.
-The application performs L2 forwarding using VMDQ and DCB to divide the incoming traffic into queues.
-The traffic splitting is performed in hardware by the VMDQ and DCB features of the Intel® 82599 and X710/XL710 Ethernet Controllers.
+The VMDq and DCB Forwarding sample application demonstrates packet processing using the DPDK.
+The application performs L2 forwarding using Intel VMDq (Virtual Machine Device Queues) combined
+with DCB (Data Center Bridging) to divide incoming traffic into queues. The traffic splitting
+is performed in hardware by the VMDq and DCB features of Intel 82599 and X710/XL710
+Ethernet Controllers.
 
 Overview
 --------
 
-This sample application can be used as a starting point for developing a new application that is based on the DPDK and
-uses VMDQ and DCB for traffic partitioning.
+This sample application can serve as a starting point for developing DPDK applications
+that use VMDq and DCB for traffic partitioning.
 
-The VMDQ and DCB filters work on MAC and VLAN traffic to divide the traffic into input queues on the basis of the Destination MAC
-address, VLAN ID and VLAN user priority fields.
-VMDQ filters split the traffic into 16 or 32 groups based on the Destination MAC and VLAN ID.
-Then, DCB places each packet into one of queues within that group, based upon the VLAN user priority field.
+About VMDq and DCB Technology
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-All traffic is read from a single incoming port (port 0) and output on port 1, without any processing being performed.
-With Intel® 82599 NIC, for example, the traffic is split into 128 queues on input, where each thread of the application reads from
-multiple queues. When run with 8 threads, that is, with the -c FF option, each thread receives and forwards packets from 16 queues.
+VMDq is a silicon-level technology that offloads network I/O packet sorting from the
+Virtual Machine Monitor (VMM) to the network controller hardware. This reduces CPU
+overhead in virtualized environments by performing Layer 2 classification in hardware.
 
-As supplied, the sample application configures the VMDQ feature to have 32 pools with 4 queues each as indicated in :numref:`figure_vmdq_dcb_example`.
-The Intel® 82599 10 Gigabit Ethernet Controller NIC also supports the splitting of traffic into 16 pools of 8 queues. While the
-Intel® X710 or XL710 Ethernet Controller NICs support many configurations of VMDQ pools of 4 or 8 queues each. For simplicity, only 16
-or 32 pools is supported in this sample. And queues numbers for each VMDQ pool can be changed by setting RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM
-in config/rte_config.h file.
-The nb-pools, nb-tcs and enable-rss parameters can be passed on the command line, after the EAL parameters:
+DCB (Data Center Bridging) extends VMDq by adding Quality of Service (QoS) support.
+DCB uses the VLAN user priority field (also called Priority Code Point or PCP) to
+classify packets into different traffic classes, enabling bandwidth allocation and
+priority-based queuing.
 
-.. code-block:: console
+How VMDq and DCB Filtering Works
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The VMDq and DCB filters work together on MAC and VLAN traffic to divide packets into
+input queues:
+
+1. **VMDq filtering**: Splits traffic into 16 or 32 groups based on the destination
+   MAC address and VLAN ID.
+
+2. **DCB classification**: Places each packet into one of the queues within its VMDq
+   group based on the VLAN user priority field.
 
-    ./<build_dir>/examples/dpdk-vmdq_dcb [EAL options] -- -p PORTMASK --nb-pools NP --nb-tcs TC --enable-rss
+All traffic is read from a single incoming port (port 0) and output on port 1 without
+modification. For the Intel 82599 NIC, traffic is split into 128 queues on input.
+Each application thread reads from multiple queues. When running with 8 threads
+(using the ``-c FF`` option), each thread receives and forwards packets from 16 queues.
 
-where, NP can be 16 or 32, TC can be 4 or 8, rss is disabled by default.
+:numref:`figure_vmdq_dcb_example` illustrates the packet flow through the application.
 
 .. _figure_vmdq_dcb_example:
 
 .. figure:: img/vmdq_dcb_example.*
 
-   Packet Flow Through the VMDQ and DCB Sample Application
+   Packet Flow Through the VMDq and DCB Sample Application
 
+Supported Configurations
+~~~~~~~~~~~~~~~~~~~~~~~~
 
-In Linux* user space, the application can display statistics with the number of packets received on each queue.
-To have the application display the statistics, send a SIGHUP signal to the running application process.
+The sample application supports the following configurations:
 
-The VMDQ and DCB Forwarding sample application is in many ways simpler than the L2 Forwarding application
-(see :doc:`l2_forward_real_virtual`)
-as it performs unidirectional L2 forwarding of packets from one port to a second port.
-No command-line options are taken by this application apart from the standard EAL command-line options.
+- **Intel 82599 10 Gigabit Ethernet Controller**: 32 pools with 4 queues each (default),
+  or 16 pools with 8 queues each.
+
+- **Intel X710/XL710 Ethernet Controllers**: Multiple configurations of VMDq pools
+  with 4 or 8 queues each. For simplicity, this sample supports only 16 or 32 pools.
+  The number of queues per VMDq pool can be changed by setting
+  ``RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM`` in ``config/rte_config.h``.
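+
+  For example, to give each pool 8 queues (a sketch; the shipped default in
+  ``rte_config.h`` may differ):
+
+  .. code-block:: c
+
+      /* Number of queues assigned to each VMDq pool by the i40e driver. */
+      #define RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM 8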
 
 .. note::
 
-    Since VMD queues are being used for VMM, this application works correctly
-    when VTd is disabled in the BIOS or Linux* kernel (intel_iommu=off).
+    Since VMDq queues are used by the Virtual Machine Monitor (VMM), this application
+    works correctly when VT-d is disabled in the BIOS or Linux kernel (``intel_iommu=off``).
 
 Compiling the Application
 -------------------------
 
-
-
-To compile the sample application see :doc:`compiling`.
+To compile the sample application, see :doc:`compiling`.
 
 The application is located in the ``vmdq_dcb`` sub-directory.
 
 Running the Application
 -----------------------
 
-To run the example in a linux environment:
+To run the example in a Linux environment:
+
+.. code-block:: console
+
+    ./<build_dir>/examples/dpdk-vmdq_dcb -l 0-3 -- -p 0x3 --nb-pools 32 --nb-tcs 4
+
+Command-Line Options
+~~~~~~~~~~~~~~~~~~~~
+
+The following application-specific options are available after the EAL parameters:
+
+``-p PORTMASK``
+    Hexadecimal bitmask of ports to configure.
+
+``--nb-pools NP``
+    Number of VMDq pools. Valid values are 16 or 32.
+
+``--nb-tcs TC``
+    Number of traffic classes. Valid values are 4 or 8.
+
+``--enable-rss``
+    Enable Receive Side Scaling. RSS is disabled by default.
+
+Example:
 
 .. code-block:: console
 
-    user@target:~$ ./<build_dir>/examples/dpdk-vmdq_dcb -l 0-3 -- -p 0x3 --nb-pools 32 --nb-tcs 4
+    ./<build_dir>/examples/dpdk-vmdq_dcb [EAL options] -- -p 0x3 --nb-pools 32 --nb-tcs 4 --enable-rss
 
-Refer to the *DPDK Getting Started Guide* for general information on running applications and
-the Environment Abstraction Layer (EAL) options.
+Refer to the *DPDK Getting Started Guide* for general information on running applications
+and the Environment Abstraction Layer (EAL) options.
 
 Explanation
 -----------
 
-The following sections provide some explanation of the code.
+The following sections explain the code structure.
 
 Initialization
 ~~~~~~~~~~~~~~
 
-The EAL, driver and PCI configuration is performed largely as in the L2 Forwarding sample application,
-as is the creation of the mbuf pool.
-See :doc:`l2_forward_real_virtual`.
-Where this example application differs is in the configuration of the NIC port for RX.
+The EAL, driver, and PCI configuration is performed similarly to the L2 Forwarding sample
+application, as is the creation of the mbuf pool. See :doc:`l2_forward_real_virtual` for details.
+
+This example application differs in the configuration of the NIC port for RX. The VMDq and
+DCB hardware features are configured at port initialization time by setting appropriate values
+in the ``rte_eth_conf`` structure passed to the ``rte_eth_dev_configure()`` API.
 
-The VMDQ and DCB hardware feature is configured at port initialization time by setting the appropriate values in the
-rte_eth_conf structure passed to the rte_eth_dev_configure() API.
-Initially in the application,
-a default structure is provided for VMDQ and DCB configuration to be filled in later by the application.
+Initially, the application provides a default structure for VMDq and DCB configuration:
 
 .. literalinclude:: ../../../examples/vmdq_dcb/main.c
     :language: c
     :start-after: Empty vmdq+dcb configuration structure. Filled in programmatically. 8<
     :end-before: >8 End of empty vmdq+dcb configuration structure.
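+
+The following is a minimal sketch of how such a structure might be populated for
+VMDq with DCB. The values are illustrative only; the sample's actual logic, which
+is described in the next sections, computes them from the ``vlan_tags`` array:
+
+.. code-block:: c
+
+    #include <string.h>
+    #include <rte_ethdev.h>
+
+    /* Sketch only: field and enum names are from rte_ethdev.h, but the
+     * pool/VLAN values here are placeholders, not the sample's settings. */
+    static void
+    sketch_vmdq_dcb_conf(struct rte_eth_conf *conf)
+    {
+        struct rte_eth_vmdq_dcb_conf *vmdq = &conf->rx_adv_conf.vmdq_dcb_conf;
+
+        memset(conf, 0, sizeof(*conf));
+        conf->rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_DCB;
+        vmdq->nb_queue_pools = RTE_ETH_32_POOLS;  /* 32 pools x 4 queues */
+        vmdq->nb_pool_maps = 1;
+        vmdq->pool_map[0].vlan_id = 1;            /* steer VLAN 1 ...   */
+        vmdq->pool_map[0].pools = 1ULL << 0;      /* ... to pool 0 only */
+    }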
 
-The get_eth_conf() function fills in an rte_eth_conf structure with the appropriate values,
-based on the global vlan_tags array,
-and dividing up the possible user priority values equally among the individual queues
-(also referred to as traffic classes) within each pool. With Intel® 82599 NIC,
-if the number of pools is 32, then the user priority fields are allocated 2 to a queue.
-If 16 pools are used, then each of the 8 user priority fields is allocated to its own queue within the pool.
-With Intel® X710/XL710 NICs, if number of tcs is 4, and number of queues in pool is 8,
-then the user priority fields are allocated 2 to one tc, and a tc has 2 queues mapping to it, then
-RSS will determine the destination queue in 2.
-For the VLAN IDs, each one can be allocated to possibly multiple pools of queues,
-so the pools parameter in the rte_eth_vmdq_dcb_conf structure is specified as a bitmask value.
-For destination MAC, each VMDQ pool will be assigned with a MAC address. In this sample, each VMDQ pool
-is assigned to the MAC like 52:54:00:12:<port_id>:<pool_id>, that is,
-the MAC of VMDQ pool 2 on port 1 is 52:54:00:12:01:02.
+Traffic Class and Queue Assignment
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The ``get_eth_conf()`` function fills in the ``rte_eth_conf`` structure with appropriate
+values based on the global ``vlan_tags`` array. The function divides user priority values
+among individual queues (traffic classes) within each pool.
+
+For Intel 82599 NICs:
+
+- With 32 pools: User priority fields are allocated 2 per queue.
+- With 16 pools: Each of the 8 user priority fields is allocated to its own queue.
+
+For Intel X710/XL710 NICs:
+
+- With 4 traffic classes and 8 queues per pool: User priority fields are allocated
+  2 per traffic class, with 2 queues mapped to each traffic class. RSS determines
+  the destination queue within each traffic class.
+
+For VLAN IDs, each ID can be allocated to multiple pools of queues, so the ``pools``
+parameter in the ``rte_eth_vmdq_dcb_conf`` structure is specified as a bitmask.
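+
+For illustration, a hypothetical entry that shares one VLAN ID between pools 0 and 5
+could be written as follows (``vmdq_conf`` stands in for the application's
+``rte_eth_vmdq_dcb_conf`` instance; the sample's real mapping is shown below):
+
+.. code-block:: c
+
+    /* Packets tagged with VLAN 100 are delivered to both pool 0 and pool 5. */
+    vmdq_conf.pool_map[0].vlan_id = 100;
+    vmdq_conf.pool_map[0].pools = (1ULL << 0) | (1ULL << 5);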
 
 .. literalinclude:: ../../../examples/vmdq_dcb/main.c
     :language: c
     :start-after: Dividing up the possible user priority values. 8<
     :end-before: >8 End of dividing up the possible user priority values.
 
+MAC Address Assignment
+^^^^^^^^^^^^^^^^^^^^^^
+
+Each VMDq pool is assigned a MAC address using the format ``52:54:00:12:<port_id>:<pool_id>``.
+For example, VMDq pool 2 on port 1 uses the MAC address ``52:54:00:12:01:02``.
+
 .. literalinclude:: ../../../examples/vmdq_dcb/main.c
     :language: c
     :start-after: Set mac for each pool. 8<
     :end-before: >8 End of set mac for each pool.
     :dedent: 1
 
-Once the network port has been initialized using the correct VMDQ and DCB values,
-the initialization of the port's RX and TX hardware rings is performed similarly to that
-in the L2 Forwarding sample application.
+After the network port is initialized with VMDq and DCB values, the port's RX and TX
+hardware rings are initialized similarly to the L2 Forwarding sample application.
 See :doc:`l2_forward_real_virtual` for more information.
 
 Statistics Display
 ~~~~~~~~~~~~~~~~~~
 
-When run in a linux environment,
-the VMDQ and DCB Forwarding sample application can display statistics showing the number of packets read from each RX queue.
-This is provided by way of a signal handler for the SIGHUP signal,
-which simply prints to standard output the packet counts in grid form.
-Each row of the output is a single pool with the columns being the queue number within that pool.
+When running in a Linux environment, the application can display statistics showing the
+number of packets read from each RX queue. The application uses a signal handler for the
+SIGHUP signal that prints packet counts in grid form, with each row representing a single
+pool and each column representing a queue number within that pool.
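+
+A simplified sketch of such a handler is shown below; the counter array and its
+dimensions are hypothetical stand-ins for the sample's actual state:
+
+.. code-block:: c
+
+    #include <signal.h>
+    #include <stdio.h>
+
+    /* Hypothetical stand-in for the sample's per-queue packet counters. */
+    enum { NUM_POOLS = 32, NUM_QUEUES = 4 };
+    static unsigned long rx_counts[NUM_POOLS][NUM_QUEUES];
+
+    static void
+    sighup_handler(int signum)
+    {
+        unsigned int p, q;
+
+        (void)signum;
+        for (p = 0; p < NUM_POOLS; p++) {        /* one row per pool     */
+            for (q = 0; q < NUM_QUEUES; q++)     /* one column per queue */
+                printf(" %10lu", rx_counts[p][q]);
+            printf("\n");
+        }
+    }
+
+    /* Registered once during initialization:
+     * signal(SIGHUP, sighup_handler); */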
 
-To generate the statistics output, use the following command:
+To generate the statistics output:
 
 .. code-block:: console
 
-    user@host$ sudo killall -HUP vmdq_dcb_app
+    sudo killall -HUP dpdk-vmdq_dcb
+
+.. note::
 
-Please note that the statistics output will appear on the terminal where the vmdq_dcb_app is running,
-rather than the terminal from which the HUP signal was sent.
+    The statistics output appears on the terminal where the application is running,
+    not on the terminal from which the HUP signal was sent.
diff --git a/doc/guides/sample_app_ug/vmdq_forwarding.rst b/doc/guides/sample_app_ug/vmdq_forwarding.rst
index c998a5a223..f100d965cd 100644
--- a/doc/guides/sample_app_ug/vmdq_forwarding.rst
+++ b/doc/guides/sample_app_ug/vmdq_forwarding.rst
@@ -2,50 +2,60 @@
     Copyright(c) 2020 Intel Corporation.
 
 VMDq Forwarding Sample Application
-==========================================
+==================================
 
-The VMDq Forwarding sample application is a simple example of packet processing using the DPDK.
-The application performs L2 forwarding using VMDq to divide the incoming traffic into queues.
-The traffic splitting is performed in hardware by the VMDq feature of the Intel® 82599 and X710/XL710 Ethernet Controllers.
+The VMDq Forwarding sample application demonstrates packet processing using the DPDK.
+The application performs L2 forwarding using Intel VMDq (Virtual Machine Device Queues)
+to divide incoming traffic into queues. The traffic splitting is performed in hardware
+by the VMDq feature of Intel 82599 and X710/XL710 Ethernet Controllers.
 
 Overview
 --------
 
-This sample application can be used as a starting point for developing a new application that is based on the DPDK and
-uses VMDq for traffic partitioning.
+This sample application can serve as a starting point for developing DPDK applications
+that use VMDq for traffic partitioning.
 
-VMDq filters split the incoming packets up into different "pools" - each with its own set of RX queues - based upon
-the MAC address and VLAN ID within the VLAN tag of the packet.
+About VMDq Technology
+~~~~~~~~~~~~~~~~~~~~~
 
-All traffic is read from a single incoming port and output on another port, without any processing being performed.
-With Intel® 82599 NIC, for example, the traffic is split into 128 queues on input, where each thread of the application reads from
-multiple queues. When run with 8 threads, that is, with the -c FF option, each thread receives and forwards packets from 16 queues.
+VMDq is a silicon-level technology designed to improve network I/O performance in
+virtualized environments. In traditional virtualized systems, the Virtual Machine Monitor
+(VMM) must sort incoming packets and route them to the correct virtual machine, consuming
+significant CPU cycles. VMDq offloads this packet sorting to the network controller hardware,
+freeing CPU resources for application workloads.
 
-As supplied, the sample application configures the VMDq feature to have 32 pools with 4 queues each.
-The Intel® 82599 10 Gigabit Ethernet Controller NIC also supports the splitting of traffic into 16 pools of 2 queues.
-While the Intel® X710 or XL710 Ethernet Controller NICs support many configurations of VMDq pools of 4 or 8 queues each.
-And queues numbers for each VMDq pool can be changed by setting RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM
-in config/rte_config.h file.
-The nb-pools and enable-rss parameters can be passed on the command line, after the EAL parameters:
+When packets arrive at a VMDq-enabled network adapter, a Layer 2 classifier in the controller
+sorts packets based on MAC addresses and VLAN tags, then places each packet in the receive
+queue assigned to the appropriate destination. This hardware-based pre-sorting reduces the
+overhead of software-based virtual switches.
 
-.. code-block:: console
+How VMDq Filtering Works
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+VMDq filters split incoming packets into different pools, each with its own set of RX queues,
+based on the MAC address and VLAN ID within the VLAN tag of the packet.
 
-    ./<build_dir>/examples/dpdk-vmdq [EAL options] -- -p PORTMASK --nb-pools NP --enable-rss
+All traffic is read from a single incoming port and output on another port without modification.
+For the Intel 82599 NIC, traffic is split into 128 queues on input. Each application thread
+reads from multiple queues. When running with 8 threads (using the ``-c FF`` option), each
+thread receives and forwards packets from 16 queues.
 
-where, NP can be 8, 16 or 32, rss is disabled by default.
+Supported Configurations
+~~~~~~~~~~~~~~~~~~~~~~~~
 
-In Linux* user space, the application can display statistics with the number of packets received on each queue.
-To have the application display the statistics, send a SIGHUP signal to the running application process.
+The sample application supports the following configurations:
 
-The VMDq Forwarding sample application is in many ways simpler than the L2 Forwarding application
-(see :doc:`l2_forward_real_virtual`)
-as it performs unidirectional L2 forwarding of packets from one port to a second port.
-No command-line options are taken by this application apart from the standard EAL command-line options.
+- **Intel 82599 10 Gigabit Ethernet Controller**: 32 pools with 4 queues each (default),
+  or 16 pools with 2 queues each.
+
+- **Intel X710/XL710 Ethernet Controllers**: Multiple configurations of VMDq pools
+  with 4 or 8 queues each. The number of queues per VMDq pool can be changed by setting
+  ``RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM`` in ``config/rte_config.h``.
 
 Compiling the Application
 -------------------------
 
-To compile the sample application see :doc:`compiling`.
+To compile the sample application, see :doc:`compiling`.
 
 The application is located in the ``vmdq`` sub-directory.
 
@@ -56,40 +66,60 @@ To run the example in a Linux environment:
 
 .. code-block:: console
 
-    user@target:~$ ./<build_dir>/examples/dpdk-vmdq -l 0-3 -- -p 0x3 --nb-pools 16
+    ./<build_dir>/examples/dpdk-vmdq -l 0-3 -- -p 0x3 --nb-pools 16
+
+Command-Line Options
+~~~~~~~~~~~~~~~~~~~~
+
+The following application-specific options are available after the EAL parameters:
+
+``-p PORTMASK``
+    Hexadecimal bitmask of ports to configure.
+
+``--nb-pools NP``
+    Number of VMDq pools. Valid values are 8, 16, or 32.
+
+``--enable-rss``
+    Enable Receive Side Scaling. RSS is disabled by default.
 
-Refer to the *DPDK Getting Started Guide* for general information on running applications and
-the Environment Abstraction Layer (EAL) options.
+Example:
+
+.. code-block:: console
+
+    ./<build_dir>/examples/dpdk-vmdq [EAL options] -- -p 0x3 --nb-pools 32 --enable-rss
+
+Refer to the *DPDK Getting Started Guide* for general information on running applications
+and the Environment Abstraction Layer (EAL) options.
 
 Explanation
 -----------
 
-The following sections provide some explanation of the code.
+The following sections explain the code structure.
 
 Initialization
 ~~~~~~~~~~~~~~
 
-The EAL, driver and PCI configuration is performed largely as in the L2 Forwarding sample application,
-as is the creation of the mbuf pool.
-See :doc:`l2_forward_real_virtual`.
-Where this example application differs is in the configuration of the NIC port for RX.
+The EAL, driver, and PCI configuration is performed similarly to the L2 Forwarding sample
+application, as is the creation of the mbuf pool. See :doc:`l2_forward_real_virtual` for details.
 
-The VMDq hardware feature is configured at port initialization time by setting the appropriate values in the
-rte_eth_conf structure passed to the rte_eth_dev_configure() API.
-Initially in the application,
-a default structure is provided for VMDq configuration to be filled in later by the application.
+This example application differs in the configuration of the NIC port for RX. The VMDq
+hardware feature is configured at port initialization time by setting appropriate values
+in the ``rte_eth_conf`` structure passed to the ``rte_eth_dev_configure()`` API.
+
+Initially, the application provides a default structure for VMDq configuration:
 
 .. literalinclude:: ../../../examples/vmdq/main.c
     :language: c
     :start-after: Default structure for VMDq. 8<
     :end-before: >8 End of Empty vdmq configuration structure.
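+
+As a sketch (field and enum names from ``rte_ethdev.h``; the values are
+illustrative placeholders), a VMDq-only configuration might be populated as
+follows before being passed to ``rte_eth_dev_configure()``:
+
+.. code-block:: c
+
+    #include <string.h>
+    #include <rte_ethdev.h>
+
+    static void
+    sketch_vmdq_conf(struct rte_eth_conf *conf)
+    {
+        struct rte_eth_vmdq_rx_conf *vmdq = &conf->rx_adv_conf.vmdq_rx_conf;
+
+        memset(conf, 0, sizeof(*conf));
+        conf->rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY;
+        vmdq->nb_queue_pools = RTE_ETH_32_POOLS;  /* 32 pools of RX queues */
+        vmdq->nb_pool_maps = 1;
+        vmdq->pool_map[0].vlan_id = 1;            /* steer VLAN 1 ...   */
+        vmdq->pool_map[0].pools = 1ULL << 0;      /* ... to pool 0 only */
+    }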
 
-The get_eth_conf() function fills in an rte_eth_conf structure with the appropriate values,
-based on the global vlan_tags array.
-For the VLAN IDs, each one can be allocated to possibly multiple pools of queues.
-For destination MAC, each VMDq pool will be assigned with a MAC address. In this sample, each VMDq pool
-is assigned to the MAC like 52:54:00:12:<port_id>:<pool_id>, that is,
-the MAC of VMDq pool 2 on port 1 is 52:54:00:12:01:02.
+The ``get_eth_conf()`` function fills in the ``rte_eth_conf`` structure with appropriate
+values based on the global ``vlan_tags`` array. Each VLAN ID can be allocated to multiple
+pools of queues.
+
+For destination MAC addresses, each VMDq pool is assigned a MAC address using the format
+``52:54:00:12:<port_id>:<pool_id>``. For example, VMDq pool 2 on port 1 uses the MAC address
+``52:54:00:12:01:02``.
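+
+A short sketch of composing such an address (``pool_mac()`` is a hypothetical
+helper; ``struct rte_ether_addr`` is the real DPDK type):
+
+.. code-block:: c
+
+    #include <rte_ether.h>
+
+    static struct rte_ether_addr
+    pool_mac(uint16_t port_id, uint16_t pool_id)
+    {
+        /* 52:54:00:12:<port_id>:<pool_id> */
+        struct rte_ether_addr mac = {
+            .addr_bytes = { 0x52, 0x54, 0x00, 0x12,
+                            (uint8_t)port_id, (uint8_t)pool_id }
+        };
+        return mac;
+    }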
 
 .. literalinclude:: ../../../examples/vmdq/main.c
     :language: c
@@ -106,25 +136,25 @@ the MAC of VMDq pool 2 on port 1 is 52:54:00:12:01:02.
     :start-after: Building correct configuration for vdmq. 8<
     :end-before: >8 End of get_eth_conf.
 
-Once the network port has been initialized using the correct VMDq values,
-the initialization of the port's RX and TX hardware rings is performed similarly to that
-in the L2 Forwarding sample application.
+After the network port is initialized with VMDq values, the port's RX and TX hardware rings
+are initialized similarly to the L2 Forwarding sample application.
 See :doc:`l2_forward_real_virtual` for more information.
 
 Statistics Display
 ~~~~~~~~~~~~~~~~~~
 
-When run in a Linux environment,
-the VMDq Forwarding sample application can display statistics showing the number of packets read from each RX queue.
-This is provided by way of a signal handler for the SIGHUP signal,
-which simply prints to standard output the packet counts in grid form.
-Each row of the output is a single pool with the columns being the queue number within that pool.
+When running in a Linux environment, the application can display statistics showing the
+number of packets read from each RX queue. The application uses a signal handler for the
+SIGHUP signal that prints packet counts in grid form, with each row representing a single
+pool and each column representing a queue number within that pool.
 
-To generate the statistics output, use the following command:
+To generate the statistics output:
 
 .. code-block:: console
 
-    user@host$ sudo killall -HUP vmdq_app
+    sudo killall -HUP dpdk-vmdq
+
+.. note::
 
-Please note that the statistics output will appear on the terminal where the vmdq_app is running,
-rather than the terminal from which the HUP signal was sent.
+    The statistics output appears on the terminal where the application is running,
+    not on the terminal from which the HUP signal was sent.
-- 
2.51.0
