Remove the inner one, which tended to be error-prone due to the cascading,
and which can be replaced by a simple if ().
Rework the outer one so that the actual flush code is not inside it. Now
we first validate whether we can send data, return if not, and only then
run the flush code.
Suggested-by: Xin
Factor out the code for generating singletons. It's used only once, but
this helps to keep the context contained.
The const variables ease the reading of subsequent calls in there.
Signed-off-by: Marcelo Ricardo Leitner
---
net/sctp/outqueue.c | 22
A collection of fixups from previous patches, left for later to not
introduce unnecessary changes while moving code around.
Signed-off-by: Marcelo Ricardo Leitner
---
net/sctp/outqueue.c | 20 +++-
1 file changed, 7 insertions(+), 13 deletions(-)
diff
Retransmissions may be triggered when in user context, so let's make use
of gfp.
Signed-off-by: Marcelo Ricardo Leitner
---
net/sctp/outqueue.c | 17 +
1 file changed, 9 insertions(+), 8 deletions(-)
diff --git a/net/sctp/outqueue.c
Pre-compute these so the compiler won't reload them (due to
no-strict-aliasing).
Changes since v2:
- Do not replace a return with a break in sctp_outq_flush_data
Signed-off-by: Marcelo Ricardo Leitner
---
net/sctp/outqueue.c | 97
To the new sctp_outq_flush_data. Again, smaller functions with well-defined
objectives.
Signed-off-by: Marcelo Ricardo Leitner
---
net/sctp/outqueue.c | 149 ++--
1 file changed, 75 insertions(+), 74 deletions(-)
With this struct we avoid passing lots of variables around and taking care
of updating the current transport/packet.
Signed-off-by: Marcelo Ricardo Leitner
---
net/sctp/outqueue.c | 182 +---
1 file changed, 88
To the new sctp_outq_flush_transports.
The comment on Nagle is outdated and is removed. Nagle is performed earlier,
while checking if the chunk fits the packet: if the outq length is not enough
to fill the packet, it returns SCTP_XMIT_DELAY.
So by when it gets to sctp_outq_flush_transports, it has to
Hi Alexander,
Thank you for the answer.
Unfortunately CentOS goes with these dinosaurs.
So we will try to debug the problem in the current one and try to
reproduce on the latest kernel.
Thanks,
Roman.
2018-05-14 18:40 GMT+03:00 Alexander Aring :
> Hi,
>
> On Sun, May 13,
From: Jeff Kirsher
Date: Mon, 14 May 2018 08:27:36 -0700
> This series contains updates to virtchnl, i40e and i40evf.
>
> Bruce cleans up whitespace and unnecessary parentheses in virtchnl.
>
> Jake does a number of stat cleanups in the i40e driver, including
>
From: Christophe JAILLET
Date: Sat, 12 May 2018 19:09:25 +0200
> 'out' is allocated with 'kvzalloc()'. 'kvfree()' must be used to free it.
>
> Signed-off-by: Christophe JAILLET
Saeed, I assume I will see this in one of your
On Sat, May 12, 2018 at 09:46:25AM +0200, Dmitry Vyukov wrote:
> On Fri, May 11, 2018 at 8:33 PM, Andrei Vagin wrote:
> > On Sat, May 05, 2018 at 10:59:02AM -0700, syzbot wrote:
> >> Hello,
> >>
> >> syzbot found the following crash on:
> >>
> >> HEAD commit:c1c07416cdd4
On Mon, May 14, 2018 at 06:05:14PM +0200, Jesper Dangaard Brouer wrote:
> On Mon, 14 May 2018 17:15:54 +0200
> Silvan Jegen wrote:
>
> > Hi
> >
> > Some typo fixes below.
> >
> > On Mon, May 14, 2018 at 3:43 PM Jesper Dangaard Brouer
> > wrote:
> > > I
On 05/14/2018 11:03 AM, Greg KH wrote:
> On Mon, May 07, 2018 at 11:54:01AM -0400, Pavel Tatashin wrote:
>> Changelog
>> v2 - v3
>> - Fixed warning from kbuild test.
>> - Moved device_lock/device_unlock inside device_shutdown_tree().
>>
>> v1 - v2
>> - It turns out we cannot lock
On 5/14/2018 9:53 AM, Tom Herbert wrote:
> On Wed, May 9, 2018 at 1:54 PM, Nambiar, Amritha
> wrote:
>> On 5/9/2018 1:31 PM, Tom Herbert wrote:
>>> On Thu, Apr 19, 2018 at 6:04 PM, Amritha Nambiar
>>> wrote:
Refactor XPS code to support
Named sctp_outq_flush_ctrl and, with that, keep the contexts contained.
One small fix embedded is the reset of one_packet at every iteration.
This allows bundling of some control chunks in case they were preceded by
another control chunk that cannot be bundled.
Other than this, it has the same
Currently sctp_outq_flush does many different and arguably unrelated
things, such as transport selection and outq dequeueing.
This patchset refactors it into smaller and more dedicated functions.
The end behavior should be the same.
The next patchset will rework the function parameters.
This struct will hold all the context used during the outq flush, so we
don't have to pass lots of pointers all around.
Checked on x86_64: the compiler inlines all these functions and there is no
dereference added because of the struct.
This patchset depends on 'sctp: refactor sctp_outq_flush'
On Mon 14 May 2018 at 16:23, Jiri Pirko wrote:
> Mon, May 14, 2018 at 04:27:06PM CEST, vla...@mellanox.com wrote:
>>Without rtnl lock protection it is no longer safe to use pointer to tc
>>action without holding reference to it. (it can be destroyed concurrently)
>>
>>Remove
ARPs for incomplete entries can't be sent anyway.
Signed-off-by: Debabrata Banerjee
---
drivers/net/bonding/bond_alb.c | 18 --
1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/drivers/net/bonding/bond_alb.c b/drivers/net/bonding/bond_alb.c
On 5/14/18 12:40 PM, Ryan Whelan wrote:
> Same behavior:
>
>
> root@rwhelan-linux ~
> # ip -6 route
> ::1 dev lo proto kernel metric 256 pref medium
> fd9b:caee:ff93:ceef:3431:3831:3930:3031 dev internal0 proto kernel metric
> 256 pref medium
> fd9b:caee:ff93:ceef:3431:3831:3930:3032 dev
From: Jason Wang
Date: Fri, 11 May 2018 10:49:25 +0800
> We used to initialize ptr_ring during TUNSETIFF, this is because its
> size depends on the tx_queue_len of netdevice. And we try to clean it
> up when the socket was detached from the netdevice. A race was spotted when
>
The ACK filter is an optional feature of CAKE which is designed to improve
performance on links with very asymmetrical rate limits. On such links
(which are unfortunately quite prevalent, especially for DSL and cable
subscribers), the downstream throughput can be limited by the number of
ACKs
This patch series adds the CAKE qdisc, and has been split up to ease
review.
I have attempted to split out each configurable feature into its own patch.
The first commit adds the base shaper and packet scheduler, while
subsequent commits add the optional features. The full userspace API and
most
On Sun, May 13, 2018 at 5:26 PM, Daniel Borkmann wrote:
> Make the RETPOLINE_{RA,ED}X_BPF_JIT() a bit more readable by
> cleaning up the macro, aligning comments and spacing.
>
> Signed-off-by: Daniel Borkmann
> ---
>
On Mon, May 14, 2018 at 01:58:09PM +0200, Anders Roxell wrote:
> If the kernel headers aren't installed we can't build all the tests.
> Add a new make target rule 'khdr' in the file lib.mk to generate the
> kernel headers and that gets include for every test-dir Makefile that
> includes lib.mk If
From: Vivien Didelot
Date: Fri, 11 May 2018 17:16:33 -0400
> The mv88e6xxx driver is still writing arbitrary registers at setup time,
> e.g. priority override bits. Add ops for them and provide specific setup
> functions for priority and stats before getting
The ingress mode is meant to be enabled when CAKE runs downlink of the
actual bottleneck (such as on an IFB device). The mode changes the shaper
to also account dropped packets to the shaped rate, as these have already
traversed the bottleneck.
Enabling ingress mode will also tune the AQM to
sch_cake targets the home router use case and is intended to squeeze the
most bandwidth and latency out of even the slowest ISP links and routers,
while presenting an API simple enough that even an ISP can configure it.
Example of use on a cable ISP uplink:
tc qdisc add dev eth0 cake bandwidth
This adds support for DiffServ-based priority queueing to CAKE. If the
shaper is in use, each priority tier gets its own virtual clock, which
limits that tier's rate to a fraction of the overall shaped rate, to
discourage trying to game the priority mechanism.
CAKE defaults to a simple,
This commit adds configurable overhead compensation support to the rate
shaper. With this feature, userspace can configure the actual bottleneck
link overhead and encapsulation mode used, which will be used by the shaper
to calculate the precise duration of each packet on the wire.
This feature
When CAKE is deployed on a gateway that also performs NAT (which is a
common deployment mode), the host fairness mechanism cannot distinguish
internal hosts from each other, and so fails to work correctly.
To fix this, we add an optional NAT awareness mode, which will query the
kernel conntrack
At lower bandwidths, the transmission time of a single GSO segment can add
an unacceptable amount of latency due to HOL blocking. Furthermore, with a
software shaper, any tuning mechanism employed by the kernel to control the
maximum size of GSO segments is thrown off by the artificial limit on
> From: Jay Vosburgh [mailto:jay.vosbu...@canonical.com]
> Debabrata Banerjee wrote:
>
> >In a mixed environment it may be difficult to tell if your hardware
> >support carrier, if it does not it can always report true. With a new
> >use_carrier option of 2, we can check
From: ebied...@xmission.com (Eric W. Biederman)
Date: Mon, 14 May 2018 08:11:24 -0500
> David Miller writes:
>
>> I'm deferring this patch series.
>>
>> If we can't get a reasonable review from an interested party in 10+
>> days, that is not reasonable.
>>
>> Resubmit this
On Mon, May 14, 2018 at 9:34 PM, Neil Horman wrote:
> On Fri, May 11, 2018 at 12:00:38PM +0200, Dmitry Vyukov wrote:
>> On Mon, Apr 30, 2018 at 8:09 PM, syzbot
>> wrote:
>> > Hello,
>> >
>> > syzbot found the following
Same behavior:
root@rwhelan-linux ~
# ip -6 route
::1 dev lo proto kernel metric 256 pref medium
fd9b:caee:ff93:ceef:3431:3831:3930:3031 dev internal0 proto kernel metric
256 pref medium
fd9b:caee:ff93:ceef:3431:3831:3930:3032 dev internal0 src
fd9b:caee:ff93:ceef:3431:3831:3930:3031 metric 1024
Replace homegrown MAC address checks with faster helpers from etherdevice.h.
Note that this will also prevent any rlb ARP updates for multicast
addresses; however, these should have been forbidden anyway.
Signed-off-by: Debabrata Banerjee
---
drivers/net/bonding/bond_alb.c | 28
The rx load balancing provided by balance-alb is not mutually
exclusive with using hashing for tx selection, and should provide a decent
speed increase because this eliminates spinlocks and cache contention.
Signed-off-by: Debabrata Banerjee
---
Series of fixes to how rlb updates are handled, code cleanup, allowing
higher performance tx hashing in balance-alb mode, and reliability of
link up/down monitoring.
v2: refactor bond_is_nondyn_tlb with inline fn, update log comment to
point out that multicast addresses will not get rlb updates.
In a mixed environment it may be difficult to tell if your hardware
supports carrier; if it does not, it can always report true. With a new
use_carrier option of 2, we can check both carrier and link status
sequentially, instead of one or the other.
Signed-off-by: Debabrata Banerjee
On 5/14/2018 3:41 PM, Jason Gunthorpe wrote:
> On Mon, May 07, 2018 at 08:53:16AM -0700, Steve Wise wrote:
>> This enhancement allows printing rdma device-specific state, if provided
>> by the kernel. This is done in a generic manner, so rdma tool doesn't
>> need to know about the details of
Hi,
In RHEVM we use traffic control to enable users to shape their QoS on
the networks they use.
IIUC from the traffic control man page, the maximal bit rate currently
supported (RHEL 7.4) is 2^32 - 1 bytes per sec, which translates to 32
Gibit\s if I have done my maths correctly.
Does anyone
From: Stephen Hemminger
commit 7b2ee50c0cd513a176a26a71f2989facdd75bfea upstream
Make a common function for detaching the internals of the device
during changes to MTU and RSS. Make sure no more packets
are transmitted and all packets have been received before
doing device
From: Stephen Hemminger
commit a7483ec0267c69b34e818738da60b392623da94b upstream
Block setup of multiple channels earlier in the teardown
process. This avoids possible races between halt and subchannel
initialization.
Suggested-by: Haiyang Zhang
These patches are backports of the latest stability-related patches
from upstream. Although it looks like a lot, it encompasses
three main areas:
1. The set of patches to get rid of races when MTU or number
of queues is changed while device is up. And make this
work on older versions of
From: Vitaly Kuznetsov
commit 0cf737808ae7cb25e952be619db46b9147a92f46 upstream.
It was found that in some cases the host refuses to tear down the GPADL for
the send/receive buffers (probably when some work with these buffers is
scheduled or ongoing). Change the teardown logic to be:
From: Stephen Hemminger
commit 8348e0460ab1473f06c8b824699dd2eed3c1979d upstream
This makes sure that no CPU is still processing packets when
the channel is closed.
Fixes: 76bb5db5c749 ("netvsc: fix use after free on module removal")
Signed-off-by: Stephen Hemminger
From: Stephen Hemminger
commit 02400fcee2542ee334a2394e0d9f6efd969fe782 upstream
The receive processing may continue to happen while the
internal network device state is in RCU grace period.
The internal RNDIS structure is associated with the
internal netvsc_device
From: Stephen Hemminger
commit 0ef58b0a05c127762f975c3dfe8b922e4aa87a29 upstream
On older versions of Windows, the host ignores messages after the
vmbus channel is closed.
Work around this by doing what Windows does and sending the teardown
before close on older versions of
From: Mohammed Gamal
commit a56d99d714665591fed8527b90eef21530ea61e0 upstream
Prior to commit 0cf737808ae7 ("hv_netvsc: netvsc_teardown_gpadl() split")
the call sequence in netvsc_device_remove() was as follows (as
implemented in netvsc_destroy_buf()):
1- Send
From: Stephen Hemminger
commit f4950e4586dfc957e0a28226eeb992ddc049b5a2 upstream
Don't wake transmit queues if link is not up yet.
Signed-off-by: Stephen Hemminger
Signed-off-by: David S. Miller
---
From: Vitaly Kuznetsov
commit aefd80e874e98a864915df5b7d90824a4340b450 upstream.
rndis_filter_device_add() is called both from netvsc_probe() when we
initially create the device and from set channels/mtu/ringparam
routines where we basically remove the device and add it
From: Stephen Hemminger
commit cfd8afd986cdb59ea9adac873c5082498a1eb7c0 upstream
If the transmit queue is known to be full, then don't keep aggregating
data. The cp_partial flag, which indicates that the current
aggregation buffer is full, can be folded in to avoid more
From: Haiyang Zhang
commit 6450f8f269a9271985e4a8c13920b7e4cf21c0f3 upstream.
For older hosts without multi-channel (vRSS) support, and some error
cases, we still need to set the real number of queues to one.
This patch adds this missing setting.
Fixes: 8195b1396ec8
From: Stephen Hemminger
commit b3bf5666a51068ad5ddd89a76ed877101ef3bc16 upstream
When VF is used for accelerated networking it will likely have
more queues (and different policy) than the synthetic NIC.
This patch defers the queue policy to the VF so that all the
From: Stephen Hemminger
commit 97f3efb64323beb0690576e9d74e94998ad6e82a upstream
The hyper-v transparent bonding should have used master_dev_link.
The netvsc device should look like a master bond device not
like the upper side of a tunnel.
This makes the semantics
From: Mohammed Gamal
commit 55be9f25be1ca5bda75c39808fc77e42691bc07f upstream
On older Windows hosts the net_device instance is returned to
the caller of rndis_filter_device_add() without having the presence
bit set first. This would cause any subsequent calls to network
From: Mohammed Gamal
commit 2afc5d61a7197de25a61f54ea4ecfb4cb62b1d42 upstream
When changing network interface settings, Windows guests
older than WS2016 can no longer shut down. This was addressed
by commit 0ef58b0a05c12 ("hv_netvsc: change GPAD teardown order
on older
On 05/12/2018 07:25 PM, Mathieu Xhonneux wrote:
[...]
> +BPF_CALL_4(bpf_lwt_seg6_store_bytes, struct sk_buff *, skb, u32, offset,
> +const void *, from, u32, len)
> +{
> +#if IS_ENABLED(CONFIG_IPV6_SEG6_BPF)
> + struct seg6_bpf_srh_state *srh_state =
> +
From: Or Gerlitz
Reaching here means we didn't err anywhere, so let's just
return success.
Signed-off-by: Or Gerlitz
Reviewed-by: Jianbo Liu
Signed-off-by: Saeed Mahameed
---
From: Or Gerlitz
Use local actions variable while parsing the actions of offloaded TC flow.
Signed-off-by: Or Gerlitz
Reviewed-by: Roi Dayan
Signed-off-by: Saeed Mahameed
---
From: Or Gerlitz
This is not needed as the attributes are zeroed out on allocation.
Signed-off-by: Or Gerlitz
Reviewed-by: Jianbo Liu
Signed-off-by: Saeed Mahameed
---
In the delete vlan flow an extra call to mlx5e_vport_context_update_vlans
was added by mistake; remove it.
Fixes: 86d722ad2c3b ("net/mlx5: Use flow steering infrastructure for mlx5_en")
Signed-off-by: Saeed Mahameed
Reviewed-by: Gal Pressman
---
Avoid using the kernel's irq_descriptor and return IRQ vector affinity
directly from the driver.
This fixes the following build break when CONFIG_SMP=n
include/linux/mlx5/driver.h: In function ‘mlx5_get_vector_affinity_hint’:
include/linux/mlx5/driver.h:1299:13: error:
‘struct irq_desc’
On 05/14/2018 04:07 PM, Willem de Bruijn wrote:
> From: Willem de Bruijn
>
> Paged allocation stores most payload in skb frags. This helps udp gso
> by avoiding copying from the gso skb to segment skb in skb_segment.
>
> But without scatter-gather, data must be linear, so
On Mon, May 14, 2018 at 7:07 PM, Willem de Bruijn
wrote:
> From: Willem de Bruijn
>
> Until the udp receive stack supports large packets (UDP GRO), GSO
> packets must not loop from the egress to the ingress path.
>
> Revert the change that
From: Willem de Bruijn
Until the udp receive stack supports large packets (UDP GRO), GSO
packets must not loop from the egress to the ingress path.
Revert the change that added NETIF_F_GSO_UDP_L4 to various virtual
devices through NETIF_F_GSO_ENCAP_ALL as this included
On 05/14/2018 04:30 PM, Willem de Bruijn wrote:
> I don't quite follow. The reported crash happens in the protocol layer,
> because of this check. With pagedlen we have not allocated
> sufficient space for the skb_put.
>
> if (!(rt->dst.dev->features & NETIF_F_SG)) {
>
On Sat, May 12, 2018 at 9:58 PM, Richard Guy Briggs wrote:
> Recognizing that the audit context is an internal audit value, use an
> access function to retrieve the audit context pointer for the task
> rather than reaching directly into the task struct to get it.
>
>
On Mon, May 14, 2018 at 10:58:44PM +0200, Andrew Lunn wrote:
> Hi Alexandre
> >
> > The ocelot dts changes are here for reference and should probably go
> > through the MIPS tree once the bindings are accepted.
>
> For your next version, you probably want to drop those patches, so
> that David
From: Or Gerlitz
Introduce levels of matching on headers of offloaded flows
(none, L2, L3, L4) that follow the inline mode levels.
This is a pre-step for us to offload flows without any
matches on headers.
Signed-off-by: Or Gerlitz
Reviewed-by: Roi
From: Gal Pressman
MLX5E_TEST_BIT macro is the same as the already existent test_bit,
remove it and replace all usages.
Signed-off-by: Gal Pressman
Signed-off-by: Tariq Toukan
Signed-off-by: Saeed Mahameed
---
From: Gal Pressman
Replace (mask & bit) check with test_bit.
Signed-off-by: Gal Pressman
Signed-off-by: Tariq Toukan
Signed-off-by: Saeed Mahameed
---
drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h
From: Eran Ben Elisha
Report all channels which got timeout on posting the minimal number of
RX WQEs and not only the first one. Avoid busy wait on every channel,
when one of the RQs check got timeout, poll once for the remaining RQs.
In addition, add channel index to log
From: Gal Pressman
Make the code more clear by replacing the existing code with __set_bit.
Signed-off-by: Gal Pressman
Signed-off-by: Tariq Toukan
Signed-off-by: Saeed Mahameed
---
On 05/14/2018 03:15 PM, Eitan Raviv wrote:
> Hi,
>
> In RHEVM we use traffic control to enable users to shape their QoS on
> the networks they use.
>
> IIUC from the traffic control man page, the maximal bit rate currently
> supported (RHEL 7.4) is 2^32 - 1 bytes per sec, which translates to
>> Paged skbuffs is an optimization for gso, but the feature should
>> continue to work even if gso skbs are linear, indeed (if at the cost
>> of copying during skb_segment).
>>
>> We need to make paged contingent on scatter-gather. Rough
>> patch below. That is for ipv4 only, the same will be
From: Willem de Bruijn
Paged allocation stores most payload in skb frags. This helps udp gso
by avoiding copying from the gso skb to segment skb in skb_segment.
But without scatter-gather, data must be linear, so do not use paged
mode unless NETIF_F_SG.
Fixes: 15e36f5b8e98
On 05/14/2018 12:20 PM, Jesper Dangaard Brouer wrote:
>
> On Mon, 14 May 2018 17:29:15 +0900 Prashant Bhole
> wrote:
>
>> Updated optstring parameter for getopt_long() to accept short options.
>> Also updated usage() function.
>>
>> Signed-off-by: Prashant
From: Willem de Bruijn
UDP GSO conflicts with transformations in the XFRM layer.
Return an error if GSO is attempted.
Fixes: bec1f6f69736 ("udp: generate gso with UDP_SEGMENT")
CC: Michal Kubecek
Signed-off-by: Willem de Bruijn
---
From: Willem de Bruijn
A few small fixes:
- disallow segmentation with XFRM
- do not leak gso packets into the ingress path
- fix a panic if scatter-gather is disabled
Willem de Bruijn (3):
udp: exclude gso from xfrm paths
gso: limit udp gso to egress-only virtual
On Sat, May 12, 2018 at 9:58 PM, Richard Guy Briggs wrote:
> The audit-related parameters in struct task_struct should ideally be
> collected together and accessed through a standard audit API.
>
> Collect the existing loginuid, sessionid and audit_context together in a
> new
From: Mohammed Gamal
commit 7992894c305eaf504d005529637ff8283d0a849d upstream
Split each of the functions into two for each of send/recv buffers.
This will be needed in order to implement a fine-grained messaging
sequence to the host so that we accommodate the requirements of
From: Haiyang Zhang
commit 47371300dfc269dd8d150e5b872bdbbda98ba809 upstream.
Rename this variable because it is the Receive indirection
table.
Signed-off-by: Haiyang Zhang
Signed-off-by: David S. Miller
---
From: Haiyang Zhang
commit 6b0cbe315868d613123cf387052ccda5f09d49ea upstream.
tx_table is part of the private data of the kernel net_device. It is
only zeroed out when allocating the net_device.
We may recreate netvsc_device w/o recreating net_device, so the private
netdev
From: Haiyang Zhang
commit 25a39f7f975c3c26a0052fbf9b59201c06744332 upstream
Since we no longer localize channel/CPU affiliation within one NUMA
node, num_online_cpus() is used as the cap on the number of channels,
instead of the number of processors in a NUMA node.
This patch
From: Stephen Hemminger
commit 12f69661a49446840d742d8feb593ace022d9f66 upstream
Change the initialization order so that the device is ready to transmit
(i.e. connect vsp is completed) before setting the internal reference
to the device with RCU.
This avoids any races
From: Haiyang Zhang
commit 39e91cfbf6f5fb26ba64cc2e8874372baf1671e7 upstream.
Simplify the variable name: tx_send_table
Signed-off-by: Haiyang Zhang
Signed-off-by: David S. Miller
---
drivers/net/hyperv/hyperv_net.h | 2 +-
From: Haiyang Zhang
commit a6fb6aa3cfa9047b62653dbcfc9bcde6e2272b41 upstream.
In some cases, like an internal vSwitch, the host doesn't provide
send indirection table updates. This patch sets the table to
equal weights after the subchannels are all open. Otherwise, all
From: Stephen Hemminger
commit d64e38ae690e3337db0d38d9b149a193a1646c4b upstream
There is a race between napi_reschedule and re-enabling interrupts
which could lead to missed host interrupts. This occurs when
interrupts are re-enabled (hv_end_read) and vmbus irq
From: Stephen Hemminger
commit fcfb4a00d1e514e8313277a01ef919de1113025b upstream
Need to delete NAPI association if vmbus_open fails.
Signed-off-by: Stephen Hemminger
Signed-off-by: David S. Miller
---
On 2018-05-11 17:16, Willem de Bruijn wrote:
Hmm, no, we absolutely need to fix GSO instead.
Think of a bonding device (or any virtual device); your patch won't
avoid the crash.
Hi Eric. Can you clarify what you mean by "fix GSO"? Is that just having
the GSO path work
regardless of
On 05/07/2018 07:50 PM, Song Liu wrote:
> Changes v2 -> v3:
> Improve syntax based on suggestion by Tobin C. Harding.
>
> Changes v1 -> v2:
> 1. Rename some variables to (hopefully) reduce confusion;
> 2. Check irq_work status with IRQ_WORK_BUSY (instead of work->sem);
> 3. In Kconfig,
On 05/14/2018 03:38 PM, Saeed Mahameed wrote:
> Avoid using the kernel's irq_descriptor and return IRQ vector affinity
> directly from the driver.
>
> This fixes the following build break when CONFIG_SMP=n
>
> include/linux/mlx5/driver.h: In function ‘mlx5_get_vector_affinity_hint’:
>
On 05/14/2018 02:10 PM, Sean Young wrote:
> Add support for BPF_PROG_IR_DECODER. This type of BPF program can call
Kconfig file below uses IR_BPF_DECODER instead of the symbol name above.
And then patch 3 uses a third choice:
The context provided to a BPF_PROG_RAWIR_DECODER is a struct
On Mon, May 14, 2018 at 7:12 PM, Eric Dumazet wrote:
>
>
> On 05/14/2018 04:07 PM, Willem de Bruijn wrote:
>> From: Willem de Bruijn
>>
>> Paged allocation stores most payload in skb frags. This helps udp gso
>> by avoiding copying from the gso skb to
Hi Vlad,
Thank you for the patch! Perhaps something to improve:
[auto build test WARNING on net/master]
[also build test WARNING on v4.17-rc5 next-20180514]
[cannot apply to net-next/master]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system]
url
On Sat, May 12, 2018 at 9:58 PM, Richard Guy Briggs wrote:
> Recognizing that the audit context is an internal audit value, use an
> access function to set the audit context pointer for the task
> rather than reaching directly into the task struct to set it.
>
> Signed-off-by:
Hi Jakub,
On Mon, 14 May 2018 13:41:40 -0700 Jakub Kicinski
wrote:
>
> On Mon, 14 May 2018 11:57:00 +1000, Stephen Rothwell wrote:
> > diff --cc tools/lib/bpf/libbpf.c
> > index 8da4eeb101a6,df54c4c9e48a..
> > --- a/tools/lib/bpf/libbpf.c
> > +++