[dpdk-dev] KNI with multiple kthreads per port

2015-03-05 Thread Zhang, Helin


> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of JP M.
> Sent: Sunday, March 1, 2015 6:40 AM
> To: dev at dpdk.org
> Subject: [dpdk-dev] KNI with multiple kthreads per port
> 
> Howdy! First time posting; please be gentle. :-)
> 
> Environment:
>  * DPDK 1.8.0 release
>  * Linux kernel 3.0.3x-ish
>  * 32-bit (yes, KNI works fine, after a few tweaks to the hugepage init strategy)
Interesting! How did you get it to work?

> 
> I'm trying to use the KNI example app with a configuration where multiple
> kthreads are created for a physical port. Per the user guide and code, the
> first such kthread is the "master", and the only one configurable; I'll refer
> to the additional kthread(s) as "slaves", although their relationship to the
> master kthread isn't discussed anywhere that I've looked thus far.
> 
> # insmod rte_kni.ko kthread_mode=multiple
> # kni [] --config="(0,0,1,2,3)"
> # ifconfig vEth0_0 10.0.0.1 netmask 255.255.255.0
> 
> From the above: PMD-bound physical port0. Rx/Tx on cores 0 and 1,
> respectively. Master thread on core 2, one slave kthread on core 3.  Upon
> startup, KNI devices vEth0_0 (master) and vEth0_1 (slave) are created.
> After ifconfig, vEth0_0 works fine; by design, vEth0_1 cannot be configured.
What do you mean by "vEth0_1 cannot be configured"?

> 
> The problem I'm encountering is that the subset of packets hitting vEth0_1 are
> being dropped... somewhere.  They're definitely getting as far as the call to
> netif_rx(skb).  I'll try on a newer system for comparison.  But before I go too
> much further, I'd like to establish the correct set-up and expectations.
So you can check the receiving side in the KNI kernel module's receive function.
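A quick way to tell whether netif_rx() itself is discarding them is to watch
the softnet drop counter and the interface statistics while traffic flows; a
minimal sketch (the counters are standard, but treat the exact commands as a
suggestion, not something verified on your setup):

# cat /proc/net/softnet_stat    (2nd hex column = drops from a full backlog queue)
# ip -s link show vEth0_1       (per-interface RX/drop counters)
# sysctl -w net.core.netdev_max_backlog=4096    (only if the backlog is overflowing)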

> 
> Should I be bonding vEth0_0 and vEth0_1?  Because I tried doing so (via sysfs);
> however, attempts to add either as slaves to bond0 were ignored.
What do you mean by bonding here? Basically, KNI has no relationship to bonding.

> 
> Any ideas appreciated. (Though it may end up being a moot point, with the
> other work this past week on KNI performance.)


[dpdk-dev] KNI with multiple kthreads per port

2015-03-04 Thread JP M.
On Wed, Mar 4, 2015 at 9:40 PM, Zhang, Helin  wrote:
>>  * 32-bit (yes, KNI works fine, after a few tweaks to the hugepage init strategy)
> Interesting! How did you get it to work?

In a nutshell: the current (circa r1.8.0) approach does mmap starting from
the bottom of the address space, then does a second round of mmap, and only
then unmaps the first round. The result is (AFAICT) that much of the address
space is wasted unnecessarily, which matters on 32-bit, where a process only
has about 3 GB of virtual address space to begin with. I'll try to get my
patch cleaned up and posted in the next several days.
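The waste is easy to observe from outside the process while the app is
running; a rough sketch, assuming hugetlbfs is mounted at /mnt/huge and the
example binary is named kni (adjust both to taste):

# grep /mnt/huge /proc/$(pidof kni)/maps | wc -l    (hugepage segments mapped)
# pmap $(pidof kni) | tail -1                       (total mapped address space)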

> What do you mean by "vEth0_1 cannot be configured"?

The primary KNI device, vEth0_0, has (I forget which) ops defined, but the
secondary KNI devices do not. That said, I now see that once the latter are
brought up, I can set their MAC address, which turns out to be important
(see below).

>> Should I be bonding vEth0_0 and vEth0_1?  Because I tried doing so (via sysfs);
>> however, attempts to add either as slaves to bond0 were ignored.
>
> What do you mean by bonding here? Basically, KNI has no relationship to bonding.

All the same, I figured out what I was doing wrong (user error on my part, I
think) with regard to bonding (EtherChannel) and am now able to enslave
multiple vnics. The catch was that the secondary KNI devices are assigned a
random MAC after being enslaved, so it is then necessary to manually set
their MAC to that of the primary KNI device. Thereafter, Rx/Tx are
load-balanced as expected. That assignment of a random MAC to the secondary
KNI devices is the hang-up; it's not clear whether there's a deliberate
reason for it.
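For the record, roughly the sequence that worked for me; treat it as a
sketch, with balance-rr as an arbitrary mode choice and 00:11:22:33:44:55
standing in for whatever MAC vEth0_0 actually has:

# modprobe bonding mode=balance-rr
# ifconfig vEth0_0 down; ifconfig vEth0_1 down    (slaves must be down to enslave)
# echo +vEth0_0 > /sys/class/net/bond0/bonding/slaves
# echo +vEth0_1 > /sys/class/net/bond0/bonding/slaves
# ifconfig bond0 10.0.0.1 netmask 255.255.255.0 up
# ifconfig vEth0_1 hw ether 00:11:22:33:44:55     (re-copy the primary's MAC)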

 ~jp


[dpdk-dev] KNI with multiple kthreads per port

2015-02-28 Thread JP M.
Howdy! First time posting; please be gentle. :-)

Environment:
 * DPDK 1.8.0 release
 * Linux kernel 3.0.3x-ish
 * 32-bit (yes, KNI works fine, after a few tweaks to the hugepage init strategy)

I'm trying to use the KNI example app with a configuration where multiple
kthreads are created for a physical port. Per the user guide and code, the
first such kthread is the "master", and the only one configurable; I'll
refer to the additional kthread(s) as "slaves", although their relationship
to the master kthread isn't discussed anywhere that I've looked thus far.

# insmod rte_kni.ko kthread_mode=multiple
# kni [] --config="(0,0,1,2,3)"
# ifconfig vEth0_0 10.0.0.1 netmask 255.255.255.0

From the above: PMD-bound physical port0. Rx/Tx on cores 0 and 1,
respectively. Master thread on core 2, one slave kthread on core 3.  Upon
startup, KNI devices vEth0_0 (master) and vEth0_1 (slave) are created.
After ifconfig, vEth0_0 works fine; by design, vEth0_1 cannot be configured.
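To spell out the mapping, the sample app's --config tuple is (port, lcore_rx,
lcore_tx [, lcore_kthread, ...]), so the line above decodes as:

  --config="(0, 0, 1, 2, 3)"
              port 0; Rx on lcore 0; Tx on lcore 1; kthreads on cores 2 and 3

With kthread_mode=multiple, each lcore_kthread entry gets its own kthread and
its own KNI device (vEth0_0, vEth0_1, ...).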

The problem I'm encountering is that the subset of packets hitting vEth0_1
are being dropped... somewhere.  They're definitely getting as far as the
call to netif_rx(skb).  I'll try on a newer system for comparison.  But
before I go too much further, I'd like to establish the correct set-up and
expectations.

Should I be bonding vEth0_0 and vEth0_1?  Because I tried doing so (via
sysfs); however, attempts to add either as slaves to bond0 were ignored.

Any ideas appreciated. (Though it may end up being a moot point, with the
other work this past week on KNI performance.)