On 2015/2/5 20:00, Damjan Marion (damarion) wrote:
> Hi,
>
> I have system with 2 NUMA nodes and 256G RAM total. I noticed that DPDK
> crashes in rte_eal_init()
> when number of available hugepages is around 4 or above.
> Everything works fine with lower values (i.e. 3).
>
> I also
On 2015/2/4 9:38, Xu, Qian Q wrote:
> 4. Launch the VM1 and VM2 with virtio device, note: you need use qemu
> version>2.1 to enable the vhost-user server's feature. Old qemu such as
> 1.5,1.6 didn't support it.
> Below is my VM1 startup command, for your reference, similar for VM2.
>
Hi,
I used l2fwd to test the ixgbe PMD's latency (packet length 64 bytes) and
found something interesting: latency is about 22us when the tx bit rate is 4M
but 103us when it is 5M.
Can anyone tell me why? Is it a bug?
Thank you very much!
--
Regards,
Haifeng
/examples/vhost/build/app/vhost-switch -c 0x300 -n 4 --huge-dir
/dev/hugepages -m 2048 -- -p 0x1 --vm2vm 2 --mergeable 0 --zero-copy 0
>
> -Original Message-
> From: Linhaifeng [mailto:haifeng.lin at huawei.com]
> Sent: Friday, February 06, 2015 12:02 PM
> To: Xu, Q
vdev, struct rte_mbuf *m)
{
...
ret = rte_vhost_enqueue_burst(tdev, VIRTIO_RXQ, &m, 1 /* you can't try to
fill with rx_count */);
..
}
>
> -Original Message-
> From: Linhaifeng [mailto:haifeng.lin at huawei.com]
> Sent: Friday, February 06, 2015 12:02 PM
> To: Xu, Qian
_user/virtio-net-user.c:104:
error: missing initializer
/mnt/sdc/linhf/dpdk-vhost-user/dpdk/lib/librte_vhost/vhost_user/virtio-net-user.c:104:
error: (near initialization for 'tmp[0].mapped_address')
> -Original Message-
> From: Linhaifeng [mailto:haifeng.lin at huawei.com]
> Sent:
On 2015/2/12 13:07, Huawei Xie wrote:
> +
> + /* This is ugly */
> + mapped_size = memory.regions[idx].memory_size +
> + memory.regions[idx].mmap_offset;
> + mapped_address = (uint64_t)(uintptr_t)mmap(NULL,
> +
On 2015/2/12 17:28, Xie, Huawei wrote:
> On 2/12/2015 4:28 PM, Linhaifeng wrote:
>>
>> On 2015/2/12 13:07, Huawei Xie wrote:
>>> +
>>> + /* This is ugly */
>>> + mapped_size = memory.regions[idx].memory_size +
>>>
On 2015/4/24 15:27, Luke Gorrie wrote:
> On 24 April 2015 at 03:01, Linhaifeng wrote:
>
>> If not add memory fence what would happen? Packets loss or interrupt
>> loss?How to test it ?
>>
>
> You should be able to test it like this:
>
> 1. Boot two
On 2015/6/9 21:34, Xie, Huawei wrote:
> On 6/9/2015 4:47 PM, Michael S. Tsirkin wrote:
>> On Tue, Jun 09, 2015 at 03:04:02PM +0800, Linhaifeng wrote:
>>>
>>> On 2015/4/24 15:27, Luke Gorrie wrote:
>>>> On 24 April 2015 at 03:01, Linhaifeng wrote:
>&g
On 2015/6/10 16:30, Luke Gorrie wrote:
> On 9 June 2015 at 10:46, Michael S. Tsirkin wrote:
>
>> By the way, similarly, host side must re-check avail idx after writing
>> used flags. I don't see where snabbswitch does it - is that a bug
>> in snabbswitch?
>
>
> Good question.
>
> Snabb
On 2015/4/23 0:33, Huawei Xie wrote:
> update of used->idx and read of avail->flags could be reordered.
> memory fence should be used to ensure the order, otherwise guest could see a
> stale used->idx value after it toggles the interrupt suppression flag.
>
> Signed-off-by: Huawei Xie
> ---
>
>
> + if (unlikely(alloc_err)) {
> + uint16_t i = entry_success;
> +
> + m->nb_segs = seg_num;
> + for (; i < free_entries; i++)
> + rte_pktmbuf_free(pkts[entry_success]); ->
> rte_pktmbuf_free(pkts[i]);
> + }
> +
>
is
> that after 3 hours, virtio2 can't receive packets, but virtio1 is still
> sending packets, am I right? So mz is like a packet generator to send
> packets, right?
>
>
> -Original Message-
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Linhaife
On 2015/2/1 18:36, Tetsuya Mukawa wrote:
> This patch should be put on "lib/librte_vhost: vhost-user support"
> patch series written by Xie, Huawei.
>
> There are 2 type of vhost devices. One is cuse, the other is vhost-user.
> So far, one of them we can use. To use the other, DPDK is needed to
On 2015/1/27 17:37, Michael S. Tsirkin wrote:
> On Tue, Jan 27, 2015 at 03:57:13PM +0800, Linhaifeng wrote:
>> Hi,all
>>
>> I use vhost-user to send data to a VM. At first it works well, but after
>> many hours the VM can no longer receive data, though it can still send.
>>
On 2015/1/28 17:51, Xie, Huawei wrote:
>
>
>> -Original Message-----
>> From: Linhaifeng [mailto:haifeng.lin at huawei.com]
>> Sent: Tuesday, January 27, 2015 3:57 PM
>> To: dpd >> dev at dpdk.org; ms >> Michael S. Tsirkin
>> Cc: lilijun;
From: Linhaifeng <haifeng@huawei.com>
When we fail to malloc a buffer from the mempool we update only last_used_idx,
not used->idx, so after many failures vhost thinks it has handled all packets
while virtio_net thinks vhost has not, and therefore never
updates avail->idx.
On 2015/3/20 11:54, linhaifeng wrote:
> From: Linhaifeng
>
> When we fail to malloc a buffer from the mempool we update only last_used_idx,
> not used->idx, so after many failures vhost thinks it has handled all packets
> while virtio_net thinks vhost has not and
From: Linhaifeng <haifeng@huawei.com>
When we fail to malloc a buffer from the mempool we update only last_used_idx,
not used->idx, so after many failures vhost thinks it has handled all packets
while virtio_net thinks vhost has not, and therefore never
updates avail->idx.
Sorry for my wrong title. Please ignore it.
On 2015/3/20 17:10, linhaifeng wrote:
> From: Linhaifeng
>
> so we should try to refill when nb_used is 0. After someone else frees mbufs
> we can start receiving packets again.
>
> Signed-off-by: Linhaifeng
> ---
> lib/librte_pmd_
On 2015/3/21 0:54, Xie, Huawei wrote:
> On 3/20/2015 6:47 PM, linhaifeng wrote:
>> From: Linhaifeng
>>
>> If we fail to alloc an mbuf ring_size times, the rx_q may be empty and can
>> never receive packets again, because nb_used stays 0 forever.
> Agreed. In current i
From: Linhaifeng <haifeng@huawei.com>
When we fail to malloc a buffer from the mempool we update only last_used_idx,
not used->idx, so after many failures vhost thinks it has handled all packets
while virtio_net thinks vhost has not, and therefore never
updates avail->idx.
Hi, Changchun & Xie,
I have modified the patch with your suggestions. Please review.
Thank you.
On 2015/3/20 15:28, Ouyang, Changchun wrote:
>
>
>> -Original Message-
>> From: Linhaifeng [mailto:haifeng.lin at huawei.com]
>> Sent: Friday, March 20, 2015 2:36 PM
From: Linhaifeng <haifeng@huawei.com>
As in rte_vhost_enqueue_burst, we should cast used->idx
to volatile before notifying the guest.
Signed-off-by: Linhaifeng
---
lib/librte_vhost/vhost_rxtx.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/librte_vhost/vhost
cc changchun.ouyang at intel.com
cc huawei.xie at intel.com
On 2015/3/21 9:47, linhaifeng wrote:
> From: Linhaifeng
>
> When we fail to malloc a buffer from the mempool we update only last_used_idx,
> not used->idx, so after many failures vhost thinks it has handled all packets
> but
cc changchun.ouyang at intel.com
cc huawei.xie at intel.com
On 2015/3/21 16:07, linhaifeng wrote:
> From: Linhaifeng
>
> As in rte_vhost_enqueue_burst, we should cast used->idx
> to volatile before notifying the guest.
>
> Signed-off-by: Linhaifeng
> ---
> lib/librte_vho
On 2015/3/23 20:54, Xie, Huawei wrote:
>
>
>> -Original Message-----
>> From: Linhaifeng [mailto:haifeng.lin at huawei.com]
>> Sent: Monday, March 23, 2015 8:24 PM
>> To: dev at dpdk.org
>> Cc: Ouyang, Changchun; Xie, Huawei
>> Subject: Re: [dp
On 2015/3/24 9:53, Xie, Huawei wrote:
> On 3/24/2015 9:00 AM, Linhaifeng wrote:
>>
>> On 2015/3/23 20:54, Xie, Huawei wrote:
>>>
>>>> -----Original Message-
>>>> From: Linhaifeng [mailto:haifeng.lin at huawei.com]
>>>> Sent: Mon
On 2015/3/24 15:14, Xie, Huawei wrote:
> On 3/22/2015 8:08 PM, Ouyang, Changchun wrote:
>>
>>> -Original Message-
>>> From: linhaifeng [mailto:haifeng.lin at huawei.com]
>>> Sent: Saturday, March 21, 2015 9:47 AM
>>> To: dev at dpdk.org
>&
On 2015/3/24 18:06, Xie, Huawei wrote:
> On 3/24/2015 3:44 PM, Linhaifeng wrote:
>>
>> On 2015/3/24 9:53, Xie, Huawei wrote:
>>> On 3/24/2015 9:00 AM, Linhaifeng wrote:
>>>> On 2015/3/23 20:54, Xie, Huawei wrote:
>>>>>> -Original Me
On 2015/3/26 15:58, Qiu, Michael wrote:
> On 3/26/2015 3:52 PM, Xie, Huawei wrote:
>> On 3/26/2015 3:05 PM, Qiu, Michael wrote:
>>> Function gpa_to_vva() could return zero, while this will lead
>>> a Segmentation fault.
>>>
>>> This patch is to fix this issue.
>>>
>>> Signed-off-by: Michael Qiu
I have tested the VFIO driver and the IGB_UIO driver with l2fwd many times. I
find that the VFIO driver's performance is no better than IGB_UIO's.
Is something wrong with my test? My test is as follows:
1. bind two 82599 NICs to VFIO: ./tools/dpdk_nic_bind.py -b vfio-pci
03:00.0 03:00.1
Thank you very much.
My cpu is "Intel(R) Xeon(R) CPU E5620 @ 2.40GHz"
--
From: Vincent JARDIN [mailto:vincent.jardin at 6wind.com]
Sent: 2014/8/8 15:46
To: Linhaifeng
Cc: dev at dpdk.org; lixiao (H); Guofeng (E)
Subject: Re: [dpdk-dev] Is VFIO driver's performa
On 2014/12/11 5:37, Huawei Xie wrote:
> vhost-user support
>
>
> Signed-off-by: Huawei Xie
> ---
> lib/librte_vhost/Makefile | 5 +-
> lib/librte_vhost/vhost-net.h | 4 +
> lib/librte_vhost/vhost_cuse/virtio-net-cdev.c | 9 +
>
On 2014/12/12 1:13, Xie, Huawei wrote:
>>
>> Only support one vhost-user port ?
>
> Do you mean vhost server by "port"?
> If that is the case, yes, now only one vhost server is supported for multiple
> virtio devices.
> As stated in the cover letter, we have requirement and plan for multiple
On 2014/10/29 9:26, Choonho Son wrote:
> Hi,
>
> After terminating DPDK application, it does not release hugepages.
> Is there any reason for it or to-do item?
>
> Thanks,
> Choonho Son
>
>
I have written a patch to release hugepages but haven't sent it yet.
I will send the patch later.
--
On 2014/10/29 11:44, Matthew Hall wrote:
> On Wed, Oct 29, 2014 at 03:27:58AM +, Qiu, Michael wrote:
>> I just saw one return path with value '0', and no any other place
>> return a negative value, so it is better to be designed as one
>> non-return function,
>>
>> +void
>>
rte_eal_hugepage_free() unlinks all hugepages. If you want to free all
hugepages you must make sure you have stopped using them, and you must call
this function before the process exits.
Signed-off-by: linhaifeng
---
.../lib/librte_eal/common/include/rte_memory.h | 11
.../lib
On 2014/10/29 14:14, Qiu, Michael wrote:
> On 10/29/2014 1:49 PM, linhaifeng wrote:
>> rte_eal_hugepage_free() unlinks all hugepages. If you want to free all
>> hugepages you must make sure you have stopped using them, and you must call
>> this function before exit
On 2014/10/29 13:26, Qiu, Michael wrote:
> On 10/29/2014 11:46 AM, Matthew Hall wrote:
>> On Wed, Oct 29, 2014 at 03:27:58AM +, Qiu, Michael wrote:
>>> I just saw one return path with value '0', and no any other place
>>> return a negative value, so it is better to be designed as one
>>>
On 2014/10/29 16:04, Qiu, Michael wrote:
> On 10/29/2014 2:41 PM, Linhaifeng wrote:
>>
>> On 2014/10/29 14:14, Qiu, Michael wrote:
>>> On 10/29/2014 1:49 PM, linhaifeng wrote:
>>>> rte_eal_hugepage_free() unlinks all hugepages. If you want to
>>>>
Will dpdk develop a vhost-user lib for the vhost-user backend of qemu?
On 2014/9/12 18:55, Huawei Xie wrote:
> The build of vhost lib requires fuse development package. It is turned off by
> default so as not to break DPDK build.
>
> Signed-off-by: Huawei Xie
> Acked-by: Konstantin Ananyev
>
When will it be published?
On 2014/8/26 19:05, Xie, Huawei wrote:
> Hi all:
> We are implementing qemu official vhost-user interface into DPDK vhost
> library, so there would be two coexisting implementations for user space
> vhost backend.
> Pro and cons in my mind:
> Existing solution:
> Pros:
Hi, all
I'm trying to use valgrind to check for memory leaks in my dpdk application, but
dpdk always fails to mmap hugepages.
Without valgrind it works well. How can I run dpdk applications under valgrind?
Is there any other way to check for memory leaks
with dpdk applications?
On 2015/4/14 4:25, Marc Sune wrote:
>
>
> On 10/04/15 07:53, Linhaifeng wrote:
>> Hi, all
>>
>> I'am trying to use valgrind to check memory leak with my dpdk application
>> but dpdk always failed to mmap hugepages.
>>
>> Without valgri
#define rte_memcpy(dst, src, n) \
((__builtin_constant_p(n)) ? \
memcpy((dst), (src), (n)) : \
rte_memcpy_func((dst), (src), (n)))
Why call memcpy when n is a compile-time constant?
Can I change them to the following code?
#define rte_memcpy(dst,
On 2015/1/22 12:45, Matthew Hall wrote:
> One theory. Many DPDK functions crash if they are called before
> rte_eal_init()
> is called. So perhaps this could be a cause, since that won't have been
> called
> when working on a constant
Hi, Matthew
Thank you for your response.
Do you mean
On 2015/1/22 19:34, Bruce Richardson wrote:
> On Thu, Jan 22, 2015 at 07:23:49PM +0900, Tetsuya Mukawa wrote:
>> On 2015/01/22 16:35, Matthew Hall wrote:
>>> On Thu, Jan 22, 2015 at 01:32:04PM +0800, Linhaifeng wrote:
>>>> Do you mean if call rte_memcpy before
On 2015/1/22 23:21, Bruce Richardson wrote:
> This (size_c) is a run-time constant, not a compile-time constant. To trigger
> the
> memcpy optimizations inside the compiler, the size value must be constant at
> compile time.
Hi, Bruce
You are right. When use compile-time constant memcpy is
On 2015/1/23 11:40, Xie, Huawei wrote:
>
>
>> -Original Message-----
>> From: Linhaifeng [mailto:haifeng.lin at huawei.com]
>> Sent: Thursday, December 11, 2014 1:36 PM
>> To: Xie, Huawei; dev at dpdk.org
>> Cc: haifeng.lin at intel.com
>> Subjec
Hi, Xie
Could you test vhost-user with the following numa node xml:
2097152
I can't receive data from the VM with the above xml.
On 2014/12/11 5:37, Huawei Xie wrote:
> This patchset refines vhost library to support both vhost-cuse and vhost-user.
>
>
> Huawei Xie (12):
>
>>
>> Can you mmap the region if gpa is 0? When i run VM with two numa node (qemu
>> will create two hugepage file) found that always failed to mmap with the
>> region
>> which gpa is 0.
>>
>> BTW can we ensure the memory regions cover with all the memory of hugepage
>> for VM?
>>
> We had
On 2014/12/19 2:07, ciara.loftus at intel.com wrote:
> From: Ciara Loftus
>
> This patch fixes the issue whereby when using userspace vhost ports
> in the context of vSwitching, the name provided to the hypervisor/QEMU
> of the vhost tap device needs to be exposed in the library, in order
Who
Hi,all
I use vhost-user to send data to a VM. At first it works well, but after many
hours the VM can no longer receive data, though it can still send.
(gdb)p avail_idx
$4 = 2668
(gdb)p free_entries
$5 = 0
(gdb)l
/* check that we have enough buffers */
if (unlikely(count > free_entries))
From: Linhaifeng <haifeng@huawei.com>
If we find there is no buffer we should notify virtio_net to
fill buffers.
We used mz to send packets from VM to VM and found that the other VM
stops receiving data after many hours.
Signed-off-by: Linhaifeng
---
lib/librte_vhost/vhost_rxtx
On 2015/1/29 18:39, Xie, Huawei wrote:
>> -if (count == 0)
>> +/* If there are no buffers we should notify the guest to fill.
>> + * This is needed when the guest uses the virtio_net driver (not pmd).
>> +*/
>> +if (count == 0) {
>> +
On 2015/1/29 21:00, Xie, Huawei wrote:
>
>
>> -Original Message-----
>> From: Linhaifeng [mailto:haifeng.lin at huawei.com]
>> Sent: Thursday, January 29, 2015 8:39 PM
>> To: Xie, Huawei; dev at dpdk.org
>> Subject: Re: [dpdk-dev] [PATCH] vho
On 2015/1/30 0:48, Srinivasreddy R wrote:
> EAL: 512 hugepages of size 2097152 reserved, but no mounted hugetlbfs found
> for that size
Maybe you haven't mounted hugetlbfs.
--
Regards,
Haifeng
e right.
>
>
> -Original Message-
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Linhaifeng
> Sent: Thursday, January 29, 2015 9:51 PM
> To: Xie, Huawei; dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there
>
On 2015/1/26 11:20, Huawei Xie wrote:
> In virtnet_send_command:
>
> /* Caller should know better */
> BUG_ON(!virtio_has_feature(vi->vdev, VIRTIO_NET_F_CTRL_VQ) ||
> (out + in > VIRTNET_SEND_COMMAND_SG_MAX));
>
> Signed-off-by: Huawei Xie
> ---
>
On 2015/1/30 19:40, zhangsha (A) wrote:
> Hi, all
>
> I am suffering from the problem mmap failed as followed when init dpdk eal.
>
> Fri Jan 30 09:03:29 2015:EAL: Setting up memory...
> Fri Jan 30 09:03:34 2015:EAL: map_all_hugepages(): mmap failed: Cannot
> allocate memory
> Fri Jan 30
48232 kB
>> Unevictable:3704 kB
>> Mlocked:3704 kB
>> SwapTotal: 16686076 kB
>> SwapFree: 16686076 kB
>> Dirty: 488 kB
>> Writeback: 0 kB
>> AnonPages:230800 kB
>> Mapped: 55248
hi
I use 6 ports to send pkts in a VM, but only 4 ports work. How can I enable
more ports?
On 2014/11/14 17:08, Wang, Zhihong wrote:
> Hi all,
>
> I'd like to propose an update on DPDK memcpy optimization.
> Please see RFC below for details.
>
>
> Thanks
> John
>
> ---
>
> DPDK Memcpy Optimization
>
> 1. Introduction
> 2. Terminology
> 3. Mechanism
> 3.1 Architectural
On 2016/11/1 18:46, Ferruh Yigit wrote:
> Hi Haifeng,
>
> On 10/31/2016 3:52 AM, linhaifeng wrote:
>> From: Haifeng Lin
>>
>> If rx vlan offload is enabled we should not handle vlan slow
>> packets either.
>>
>> Signed-off-by: Haifeng Lin
>> ---
>&
Hi, all
Please ignore the patch titled "net/bonding: not handle vlan slow
packet";
I will send another one.
On 2016/11/1 20:32, linhaifeng wrote:
> On 2016/11/1 18:46, Ferruh Yigit wrote:
>> Hi Haifeng,
>>
>> On 10/31/2016 3:52 AM, linhaifeng wrote:
>>&g
From: Linhaifeng <haifeng@huawei.com>
We should not drop slow packets whose subtype is
not marker or lacp, because slow packets have other subtypes
such as OAM, OSSP, and user-defined ones.
Signed-off-by: Linhaifeng
---
drivers/net/bonding/rte_eth_bond_pmd.c | 14 +-
On 2016/10/9 15:27, Yuanhan Liu wrote:
> + dev->nr_guest_pages = 0;
> + if (!dev->guest_pages) {
> + dev->max_guest_pages = 8;
> + dev->guest_pages = malloc(dev->max_guest_pages *
> + sizeof(struct guest_page));
> + }
> +
On 2016/10/9 15:27, Yuanhan Liu wrote:
> +static void
> +add_guest_pages(struct virtio_net *dev, struct virtio_memory_region *reg,
> + uint64_t page_size)
> +{
> + uint64_t reg_size = reg->size;
> + uint64_t host_user_addr = reg->host_user_addr;
> + uint64_t guest_phys_addr =
On 2016/8/23 16:10, Yuanhan Liu wrote:
> The basic idea of Tx zero copy is, instead of copying data from the
> desc buf, here we let the mbuf reference the desc buf addr directly.
Is there a problem when pushing a vlan tag onto an mbuf that references the
desc buf addr directly?
We know if guest use
On 2016/7/30 21:30, Wiles, Keith wrote:
>> On Jul 30, 2016, at 1:03 AM, linhaifeng wrote:
>>
>> hi
>>
>> I use 6 ports to send pkts in VM, but can only 4 ports work, how to enable
>> more ports to work?
>>
> In the help screen the command 'ppp [1-6]'
hi, thomas
Could you change the name of the file in directory
app/test/test_pci_sysfs/bus/pci/devices/ ?
Somebody like us also can't access the internet on Linux, and Windows does not
support file names that
include ':'.
thanks
linhaifeng
On 2016/8/7 4:33, Jan Viktorin wrote:
> On Fri, 05 Aug 2016 09:51:06 +0200
> Thomas Monjalon wrote:
>
>> 2016-08-05 09:44, Thomas Monjalon:
>>> 2016-08-05 10:09, linhaifeng:
>>>> hi,thomas
>>>>
>>>> Could you change the name of fil
Hi, Ravi Kerur
On 2015/5/9 5:19, Ravi Kerur wrote:
> Preliminary results on Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz, Ubuntu
> 14.04 x86_64 shows comparisons using AVX/SSE instructions taking 1/3rd
> CPU ticks for 16, 32, 48 and 64 bytes comparison. In addition,
I wrote a program to test
On 2015/5/13 9:18, Ravi Kerur wrote:
> If you can wait until Thursday I will probably send v3 patch which will
> have full memcmp support.
Ok, I'd like to test it:)
>
> In your program try with volatile pointer and see if it helps.
like "volatile uint8_t *src, *dst" ?
On 2014/11/12 5:37, Xie, Huawei wrote:
> Hi Tetsuya:
> There are two major technical issues in my mind for vhost-user implementation.
>
> 1) memory region map
> Vhost-user passes us file fd and offset for each memory region. Unfortunately
> the mmap offset is "very" wrong. I discovered this
On 2014/11/12 12:12, Tetsuya Mukawa wrote:
> Hi Xie,
>
> (2014/11/12 6:37), Xie, Huawei wrote:
>> Hi Tetsuya:
>> There are two major technical issues in my mind for vhost-user
>> implementation.
>>
>> 1) memory region map
>> Vhost-user passes us file fd and offset for each memory region.
>>
On 2014/11/14 9:28, Xie, Huawei wrote:
>
>
>> -Original Message-----
>> From: Linhaifeng [mailto:haifeng.lin at huawei.com]
>> Sent: Wednesday, November 12, 2014 11:28 PM
>> To: Xie, Huawei; 'Tetsuya Mukawa'; dev at dpdk.org
>> Subject: Re: [dp
On 2014/11/14 10:30, Tetsuya Mukawa wrote:
> Hi Lin,
>
> (2014/11/13 15:30), Linhaifeng wrote:
>> On 2014/11/12 12:12, Tetsuya Mukawa wrote:
>>> Hi Xie,
>>>
>>> (2014/11/12 6:37), Xie, Huawei wrote:
>>>> Hi Tetsuya:
>>>&
On 2014/11/14 11:40, Tetsuya Mukawa wrote:
> Hi Lin,
>
> (2014/11/14 12:13), Linhaifeng wrote:
>>
>> size should be same as mmap and
>> guest_mem -= (memory.regions[i].mmap_offset / sizeof(*guest_mem));
>>
>
> Thanks. It
On 2014/11/14 13:12, Tetsuya Mukawa wrote:
> ease try another value like 6000MB
I have tried the value 6000MB; I can munmap successfully.
If you mmap with size "memory_size + memory_offset" you should also munmap with
that size.
--
Regards,
Haifeng
Hi, all
When I compile my program with dpdk there is a warning from gcc, shown below.
I don't know how to avoid it. Please help.
/usr/include/dpdk-1.7.0/x86_64-native-linuxapp-gcc//include/rte_common.h:176:
warning: cast from function call of type 'uintptr_t' to non-matching type
On 2016/10/10 16:03, Yuanhan Liu wrote:
> On Sun, Oct 09, 2016 at 06:46:44PM +0800, linhaifeng wrote:
>> On 2016/8/23 16:10, Yuanhan Liu wrote:
>>> The basic idea of Tx zero copy is, instead of copying data from the
>>> desc buf, here we let the mbuf reference
If rx vlan offload is enabled we should not handle vlan slow
packets either.
Signed-off-by: Haifeng Lin
---
drivers/net/bonding/rte_eth_bond_pmd.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c
From: ZengGanghui
If rx vlan offload is enabled we should not handle vlan slow
packets either.
Signed-off-by: Haifeng Lin
---
drivers/net/bonding/rte_eth_bond_pmd.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c
If rx vlan offload is enabled we should not handle vlan slow
packets either.
Signed-off-by: Haifeng Lin
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c
b/drivers/net/bonding/rte_eth_bond_pmd.c
index 43334f7..6c74bba 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++
From: Haifeng Lin
If rx vlan offload is enabled we should not handle vlan slow
packets either.
Signed-off-by: Haifeng Lin
---
drivers/net/bonding/rte_eth_bond_pmd.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c
From: Haifeng Lin
If rx vlan offload is enabled we should not handle vlan slow
packets either.
Signed-off-by: Haifeng Lin
---
drivers/net/bonding/rte_eth_bond_pmd.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c