On 11/27/2011 12:07 AM, Eric Dumazet wrote:
> Le dimanche 27 novembre 2011 à 13:27 +0800, Cong Wang a écrit :
>> Signed-off-by: Cong Wang <amw...@redhat.com>
>> ---
>> diff --git a/drivers/net/ethernet/intel/e1000/e1000_main.c b/drivers/net/ethernet/intel/e1000/e1000_main.c
>> index cf480b5..b194beb 100644
>> --- a/drivers/net/ethernet/intel/e1000/e1000_main.c
>> +++ b/drivers/net/ethernet/intel/e1000/e1000_main.c
>> @@ -3878,11 +3878,9 @@ static bool e1000_clean_jumbo_rx_irq(struct e1000_adapter *adapter,
>>                              if (length <= copybreak &&
>>                                  skb_tailroom(skb) >= length) {
>>                                      u8 *vaddr;
>> -                                    vaddr = kmap_atomic(buffer_info->page,
>> -                                                        KM_SKB_DATA_SOFTIRQ);
>> +                                    vaddr = kmap_atomic(buffer_info->page);
>>                                      memcpy(skb_tail_pointer(skb), vaddr, length);
>> -                                    kunmap_atomic(vaddr,
>> -                                                  KM_SKB_DATA_SOFTIRQ);
>> +                                    kunmap_atomic(vaddr);
>>                                      /* re-use the page, so don't erase
>>                                       * buffer_info->page */
>>                                      skb_put(skb, length);
>> diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
>> index a855db1..8603c87 100644
>> --- a/drivers/net/ethernet/intel/e1000e/netdev.c
>> +++ b/drivers/net/ethernet/intel/e1000e/netdev.c
>> @@ -1272,9 +1272,9 @@ static bool e1000_clean_rx_irq_ps(struct e1000_adapter *adapter,
>>                       */
>>                      dma_sync_single_for_cpu(&pdev->dev, ps_page->dma,
>>                                              PAGE_SIZE, DMA_FROM_DEVICE);
>> -                    vaddr = kmap_atomic(ps_page->page, KM_SKB_DATA_SOFTIRQ);
>> +                    vaddr = kmap_atomic(ps_page->page);
>>                      memcpy(skb_tail_pointer(skb), vaddr, l1);
>> -                    kunmap_atomic(vaddr, KM_SKB_DATA_SOFTIRQ);
>> +                    kunmap_atomic(vaddr);
>>                      dma_sync_single_for_device(&pdev->dev, ps_page->dma,
>>                                                 PAGE_SIZE, DMA_FROM_DEVICE);
>>  
>> @@ -1465,12 +1465,10 @@ static bool e1000_clean_jumbo_rx_irq(struct e1000_adapter *adapter,
>>                              if (length <= copybreak &&
>>                                  skb_tailroom(skb) >= length) {
>>                                      u8 *vaddr;
>> -                                    vaddr = kmap_atomic(buffer_info->page,
>> -                                                       KM_SKB_DATA_SOFTIRQ);
>> +                                    vaddr = kmap_atomic(buffer_info->page);
>>                                      memcpy(skb_tail_pointer(skb), vaddr,
>>                                             length);
>> -                                    kunmap_atomic(vaddr,
>> -                                                  KM_SKB_DATA_SOFTIRQ);
>> +                                    kunmap_atomic(vaddr);
>>                                      /* re-use the page, so don't erase
>>                                       * buffer_info->page */
>>                                      skb_put(skb, length);
> But why are these drivers using kmap_atomic() in first place, since
> their fragments are allocated in regular zone (GFP_ATOMIC or
> GFP_KERNEL) ?

I was asking myself the same thing recently when I started working on
some copy-break-like code for the ixgbe driver.  I believe the main
reason is a lack of documentation.  This code is loosely based on
skb_copy_bits(), which uses kmap_skb_frag() over all of the paged
portions of the sk_buff.  As such, it was decided to map the pages via
kmap_atomic() in order to guarantee they had a valid virtual address.
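
For reference, the copy-break pattern being discussed looks roughly
like this (a simplified sketch, not the exact driver code; rx_page,
len, and copybreak here are stand-in names):

```c
/* Simplified copy-break sketch.  kmap_atomic() is only strictly
 * needed if rx_page could live in highmem; for a lowmem page it
 * effectively degenerates to page_address() plus some extra
 * preemption/pagefault bookkeeping.
 */
if (len <= copybreak && skb_tailroom(skb) >= len) {
	u8 *vaddr;

	vaddr = kmap_atomic(rx_page);		/* map page for CPU access */
	memcpy(skb_tail_pointer(skb), vaddr, len);
	kunmap_atomic(vaddr);			/* drop the temporary mapping */
	skb_put(skb, len);			/* account the copied bytes */
	/* rx_page is left in place so it can be re-used */
}
```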

If I understand things correctly, what you are bringing up is that
pages allocated with either GFP_ATOMIC or GFP_KERNEL will always come
from the lowmem pool, and as such page_address() should always
succeed.  Is that correct?
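
If so, the kmap_atomic()/kunmap_atomic() pair could presumably be
dropped in favor of a plain page_address() for these pages, something
like this (a hypothetical sketch, assuming the page was allocated
without __GFP_HIGHMEM):

```c
/* Hypothetical sketch: a page allocated with GFP_KERNEL or
 * GFP_ATOMIC (no __GFP_HIGHMEM) sits in the kernel's direct
 * mapping, so page_address() already yields a usable virtual
 * address and no temporary mapping is required.
 */
struct page *page = alloc_page(GFP_ATOMIC);	/* lowmem allocation */
u8 *vaddr = page_address(page);			/* direct-map address */

memcpy(skb_tail_pointer(skb), vaddr, len);	/* no kmap/kunmap needed */
skb_put(skb, len);
```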

Thanks,

Alex

_______________________________________________
E1000-devel mailing list
E1000-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/e1000-devel
To learn more about Intel® Ethernet, visit 
http://communities.intel.com/community/wired
