Hello, Christophe.
On Sun, Apr 18, 2021 at 11:25:52PM +0200, Christophe JAILLET wrote:
> Improve 'create_workqueue', 'create_freezable_workqueue' and
> 'create_singlethread_workqueue' so that they accept a format
> specifier and a variable number of arguments.
>
> This will put these macros more
Hello,
This generally looks fine to me. Some nits below.
On Tue, Oct 06, 2020 at 03:06:07PM +0300, Tariq Toukan wrote:
> @@ -432,6 +433,9 @@ struct workqueue_struct *alloc_workqueue(const char *fmt,
> WQ_MEM_RECLAIM, 1, (name))
> #define create_singlethread_workqueue(name)
On Mon, Nov 09, 2020 at 04:01:11PM +0530, Bhaskar Chowdhury wrote:
> Few spelling fixes throughout the file.
>
> Signed-off-by: Bhaskar Chowdhury
Applied to cgroup/for-5.10-fixes.
Thanks.
--
tejun
ly we do the mode switch when someone writes to the ifpriomap cgroup
> control file. The easiest fix is to also do the switch when a task is attached
> to a new cgroup.
>
> Fixes: bd1060a1d671 ("sock, cgroup: add sock->sk_cgroup")
> Reported-by: Yang Yingliang
> Tested-by: Yang Yingliang
> Signed-off-by: Zefan Li
Acked-by: Tejun Heo
Thanks.
--
tejun
Hello, Yang.
On Sat, May 02, 2020 at 06:27:21PM +0800, Yang Yingliang wrote:
> I find the number nr_dying_descendants is increasing:
> linux-dVpNUK:~ # find /sys/fs/cgroup/ -name cgroup.stat -exec grep
> '^nr_dying_descendants [^0]' {} +
> /sys/fs/cgroup/unified/cgroup.stat:nr_dying_descendants 8
Hello,
On Tue, Jan 22, 2019 at 09:33:10AM -0800, Eric Dumazet wrote:
> > 1) Before:
> >
> > Samples: 63K of event 'cycles:ppp', Event count (approx.): 48796480071
> > Children Self Co Shared Object Symbol
> > + 21.19% 3.38% tc [kernel.vmlinux] [k] pcpu_alloc
> > +3.45%
On Thu, Oct 18, 2018 at 10:56:17AM +0200, Michal Hocko wrote:
> From: Michal Hocko
>
> We have seen a customer complaining about soft lockups on !PREEMPT
> kernel config with 4.4 based kernel
...
> If a cgroup has many tasks with many open file descriptors then we would
> end up in a large loop w
Hello,
First of all, thanks a lot for the contribution. It'd be great to
have a Korean translation. I glanced over it, and there were often
areas that were a bit challenging to understand without going back
to the English version.
I think this would still be helpful and a step in the right di
Hello, Andrey.
On Fri, Aug 10, 2018 at 10:35:23PM -0700, Andrey Ignatov wrote:
> +static inline struct cgroup *cgroup_ancestor(struct cgroup *cgrp,
> + int ancestor_level)
> +{
> + struct cgroup *ptr;
> +
> + if (cgrp->level < ancestor_level)
> +
On Fri, Feb 23, 2018 at 08:12:42AM -0800, Eric Dumazet wrote:
> From: Eric Dumazet
>
> When a large BPF percpu map is destroyed, I have seen
> pcpu_balance_workfn() holding cpu for hundreds of milliseconds.
>
> On KASAN config and 112 hyperthreads, average time to destroy a chunk
> is ~4 ms.
>
On Mon, Feb 12, 2018 at 09:03:25AM -0800, Tejun Heo wrote:
> Hello, Daniel.
>
> On Mon, Feb 12, 2018 at 06:00:13PM +0100, Daniel Borkmann wrote:
> > [ +Dennis, +Tejun ]
> >
> > Looks like we're stuck in percpu allocator with key/value size of 4 bytes
>
Hello, Daniel.
On Mon, Feb 12, 2018 at 06:00:13PM +0100, Daniel Borkmann wrote:
> [ +Dennis, +Tejun ]
>
> Looks like we're stuck in percpu allocator with key/value size of 4 bytes
> each and large number of entries (max_entries) in the reproducer in above
> link.
>
> Could we have some __GFP_NOR
Hello,
On Wed, Oct 18, 2017 at 04:45:08PM -0500, Dennis Zhou wrote:
> I'm not sure I see the reason we can't match the minimum allocation size
> with the unit size? It seems weird to arbitrate the maximum allocation
> size given a lower bound on the unit size.
idk, it can be weird for the maximum
Hello, Daniel.
(cc'ing Dennis)
On Tue, Oct 17, 2017 at 04:55:51PM +0200, Daniel Borkmann wrote:
> The set fixes a splat in devmap percpu allocation when we alloc
> the flush bitmap. Patch 1 is a prerequisite for the fix in patch 2,
> patch 1 is rather small, so if this could be routed via -net, f
On Mon, Oct 09, 2017 at 06:01:37AM -0700, Eric Dumazet wrote:
> From: Eric Dumazet
>
> per cpu allocations are already zeroed, no need to clear them again.
>
> Fixes: d52d3997f843f ("ipv6: Create percpu rt6_info")
> Signed-off-by: Eric Dumazet
> Cc: Martin KaFai
Hello,
On Thu, Sep 28, 2017 at 03:45:38PM +0100, Mark Rutland wrote:
> > Perhaps the pr_warn() should be ratelimited; or could there be an
> > option where we only return NULL, not triggering a warn at all (which
> > would likely be what callers might do anyway when checking against
> > PCPU_MIN_U
Hello,
On Thu, Sep 28, 2017 at 12:27:28PM +0100, Mark Rutland wrote:
> diff --git a/mm/percpu.c b/mm/percpu.c
> index 59d44d6..f731c45 100644
> --- a/mm/percpu.c
> +++ b/mm/percpu.c
> @@ -1355,8 +1355,13 @@ static void __percpu *pcpu_alloc(size_t size, size_t
> align, bool reserved,
> bit
Hello, Alexei.
On Thu, Aug 31, 2017 at 08:27:56PM -0700, Alexei Starovoitov wrote:
> > > The 2 flags are completely independent. The existing override logic is
> > > unchanged. If a program can not be overridden, then the new recursive
> > > flag is irrelevant.
> >
> > I'm not sure all four combo
Hello, David, Alexei.
Sorry about late reply.
On Sun, Aug 27, 2017 at 08:49:23AM -0600, David Ahern wrote:
> On 8/25/17 8:49 PM, Alexei Starovoitov wrote:
> >
> >> + if (prog && curr_recursive && !new_recursive)
> >> + /* if a parent has recursive prog attached, only
> >> + *
Hello,
On Wed, Jul 12, 2017 at 03:31:02PM +0200, Arnd Bergmann wrote:
> > We also have about a bazillion
> >
> > warning: ‘*’ in boolean context, suggest ‘&&’ instead
> >
> > warnings in drivers/ata/libata-core.c, all due to a single macro that
> > uses a pattern that gcc-7.1.1 doesn't like. T
On Mon, Jun 26, 2017 at 12:07:30AM -0700, Christoph Hellwig wrote:
> Tejun, does this look ok to you?
Acked-by: Tejun Heo
Thanks.
--
tejun
On Mon, Jun 26, 2017 at 12:07:15AM -0700, Christoph Hellwig wrote:
> Tejun, does this look ok to you?
Sure,
Acked-by: Tejun Heo
Thanks.
--
tejun
code the helper into write_classid().
Reported-by: David Goode
Fixes: 3b13758f51de ("cgroups: Allow dynamically changing net_classid")
Cc: sta...@vger.kernel.org # v4.4+
Cc: Nina Schiff
Cc: David S. Miller
Signed-off-by: Tejun Heo
---
Hello, Dave.
Can you please route this fix for v4.11? I can
Hello,
On Mon, Feb 13, 2017 at 08:04:57AM +1300, Eric W. Biederman wrote:
> > Yeah, the whole thing never considered netns or delegation. Maybe the
> > read function itself should probably filter on the namespace of the
> > reader? I'm not completely sure whether trying to fix it won't cause
> >
98494be ("cgroup: add support for eBPF programs")
> Signed-off-by: Alexei Starovoitov
The cgroup part looks good to me. Please feel free to add
Acked-by: Tejun Heo
One question tho. Is there a specific reason to disallow attaching
!overridable program under an overridable one? Isn't disallowing
attaching programs if the closest ancestor is !overridable enough?
Thanks.
--
tejun
Hello,
On Sun, Feb 05, 2017 at 11:05:36PM -0800, Cong Wang wrote:
> > To be more specific, the read operation of net_prio.ifpriomap is handled by
> > the
> > function read_priomap. Tracing from this function, we can find it invokes
> > for_each_netdev_rcu and set the first parameter as the addres
Hello, Eric.
On Thu, Jan 26, 2017 at 01:45:07PM +1300, Eric W. Biederman wrote:
> > Eric, does this sound okay to you? You're the authority on exposing
> > things like namespace ids to users.
>
> *Boggle* Things that run across all network namespaces break any kind
> of sense I have about thin
Hello, Andy.
On Tue, Jan 24, 2017 at 10:54:03AM -0800, Andy Lutomirski wrote:
> Tejun, I can see two basic ways that cgroup+bpf delegation could work
> down the road. Both depend on unprivileged users being able to load
> BPF programs of the correct type, but that's mostly a matter of
> auditing
Hello, Andy.
On Wed, Jan 18, 2017 at 04:18:04PM -0800, Andy Lutomirski wrote:
> To map cgroup -> hook, a simple array in each cgroup structure works.
> To map (cgroup, netns) -> hook function, the natural approach would be
> to have some kind of hash, and that would be slower. This code is
> inte
Hello, Andy.
On Mon, Jan 16, 2017 at 09:18:36PM -0800, Andy Lutomirski wrote:
> Perhaps this is a net feature, though, not a cgroup feature. This
> would IMO make a certain amount of sense. Iptables cgroup matches,
> for example, logically are an iptables (i.e., net) feature. The
Yeap.
> pro
Hello, Michal.
On Tue, Jan 17, 2017 at 02:58:30PM +0100, Michal Hocko wrote:
> This would require using hierarchical cgroup iterators to iterate over
It does behave hierarchically.
> tasks. As per Andy's testing this doesn't seem to be the case. I haven't
That's not what Andy's testing showed.
Hello,
Sorry about the delay. Some fire fighting followed the holidays.
On Tue, Jan 03, 2017 at 11:25:59AM +0100, Michal Hocko wrote:
> > So from what I understand the proposed cgroup is not in fact
> > hierarchical at all.
> >
> > @TJ, I thought you were enforcing all new cgroups to be proper
Hello, John.
On Thu, Dec 08, 2016 at 09:39:38PM -0800, John Stultz wrote:
> So just to clarify the discussion for my purposes and make sure I
> understood, per-cgroup CAP rules was not desired, and instead we
> should either utilize an existing cap (are there still objections to
> CAP_SYS_RESOURCE
Hello,
On Tue, Dec 06, 2016 at 10:13:53AM -0800, Andy Lutomirski wrote:
> > Delegation is an explicit operation and reflected in the ownership of
> > the subdirectories and cgroup interface files in them. The
> > subhierarchy containment is achieved by requiring the user who's
> > trying to migra
Hello,
On Tue, Dec 06, 2016 at 09:01:17AM -0800, Andy Lutomirski wrote:
> How would one be granted the right to move processes around in one's
> own subtree?
Through explicit delegation - chowning of the directory and the
cgroup.procs file.
> Are you imagining that, if you're in /a/b and you want to
Hello, Serge.
On Mon, Dec 05, 2016 at 08:00:11PM -0600, Serge E. Hallyn wrote:
> > I really don't know. The cgroupfs interface is a bit unfortunate in
> > that it doesn't really express the constraints. To safely migrate a
> > task, ISTM you ought to have some form of privilege over the task
> >
Hello,
On Mon, Dec 05, 2016 at 04:36:51PM -0800, Andy Lutomirski wrote:
> I really don't know. The cgroupfs interface is a bit unfortunate in
> that it doesn't really express the constraints. To safely migrate a
> task, ISTM you ought to have some form of privilege over the task
> *and* some for
On Tue, Sep 27, 2016 at 08:42:42AM -0700, Shaohua Li wrote:
> put_cpu_var takes the percpu data, not the data returned from
> get_cpu_var.
>
> This doesn't change the behavior.
>
> Cc: Tejun Heo
> Signed-off-by: Shaohua Li
Acked-by: Tejun Heo
Thanks.
--
tejun
On Tue, Sep 27, 2016 at 08:42:41AM -0700, Shaohua Li wrote:
> put_cpu_var takes the percpu data, not the data returned from
> get_cpu_var.
>
> This doesn't change the behavior.
>
> Cc: Tejun Heo
> Cc: Alexei Starovoitov
> Signed-off-by: Shaohua Li
Acked-by: Tejun Heo
Thanks.
--
tejun
Hello,
On Tue, Sep 13, 2016 at 08:14:40PM +0200, Jiri Slaby wrote:
> I assume Dmitry sees the same what I am still seeing, so I reported this
> some time ago:
> https://lkml.org/lkml/2016/3/21/492
>
> This warning is triggered there and still occurs with "HEAD":
> (pwq != wq->dfl_pwq) && (pwq->
mory controller callbacks to
> match the cgroup core callbacks, then move them to the same place.
>
> Signed-off-by: Johannes Weiner
For 1-3,
Acked-by: Tejun Heo
Thanks.
--
tejun
Hello,
On Sat, Sep 10, 2016 at 11:33:48AM +0200, Dmitry Vyukov wrote:
> Hit the WARNING with the patch. It showed "Showing busy workqueues and
> worker pools:" after the WARNING, but then no queue info. Was it
> already destroyed and removed from the list?...
Hmm... It either means that the work
0025b0f20def27b1a09dff7 Mon Sep 17 00:00:00 2001
From: Tejun Heo
Date: Mon, 5 Sep 2016 08:54:06 -0400
Subject: [PATCH] workqueue: dump workqueue state on sanity check failures in
destroy_workqueue()
destroy_workqueue() performs a number of sanity checks to ensure that
the workqueue is empty before p
s under memory pressure.
>
> Signed-off-by: Bhaktipriya Shridhar
Acked-by: Tejun Heo
Thanks.
--
tejun
o replace the deprecated
> create_singlethread_workqueue instance.
>
> The WQ_MEM_RECLAIM flag has been set to ensure forward progress under
> memory pressure since it's a network driver.
>
> Since there are fixed number of work items, explicit concurrency
> limit is unnecessary here.
>
> S
een used.
>
> Since, it is a network driver, WQ_MEM_RECLAIM has been set to
> ensure forward progress under memory pressure.
>
> Signed-off-by: Bhaktipriya Shridhar
Acked-by: Tejun Heo
Thanks.
--
tejun
Hello, Sargun.
On Mon, Aug 29, 2016 at 11:49:07AM -0700, Sargun Dhillon wrote:
> It would be a separate hook per LSM hook. Why wouldn't we want a separate bpf
> hook per lsm hook? I think if one program has to handle them all, the first
> program would be looking up the hook program in a bpf pro
Hello,
On Mon, Aug 29, 2016 at 04:47:07AM -0700, Sargun Dhillon wrote:
> This patch adds a minor LSM, Checmate. Checmate is a flexible programmable,
> extensible minor LSM that's coupled with cgroups and BPF. It is designed to
> enforce container-specific policies. It is also a cgroupv2 controller
Hello,
On Fri, Aug 26, 2016 at 07:20:35AM -0700, Andy Lutomirski wrote:
> > This is simply the action of changing the owner of cgroup sysfs files to
> > allow an unprivileged user to handle them (cf. Documentation/cgroup-v2.txt)
>
> As far as I can tell, Tejun and systemd both actively discourage
Hello,
On Thu, Aug 25, 2016 at 04:44:13PM +0200, Mickaël Salaün wrote:
> I tested with cgroup-v2 but indeed, it seems a bit different with
> cgroup-v1 :)
> Does anyone know how to handle both cases?
If you wanna do cgroup membership test, just do cgroup v2 membership
test. No need to introduce a
Hello, Sargun.
On Sun, Aug 21, 2016 at 01:14:22PM -0700, Sargun Dhillon wrote:
> So, casually looking at this patch, it looks like you're relying on
> sock_cgroup_data, which only points to the default hierarchy. If someone uses
> net_prio or net_classid, cgroup_sk_alloc_disable is called, and t
On Thu, Aug 25, 2016 at 12:09:20PM -0400, Tejun Heo wrote:
> ebpf approach does have its shortcomings for sure but mending them
> seems a lot more manageable and future-proof than going with fixed but
> constantly expanding set of operations. e.g. We can add per-cgroup
> bpf progra
Hello, Mahesh.
On Thu, Aug 25, 2016 at 08:54:19AM -0700, Mahesh Bandewar (महेश बंडेवार) wrote:
> In short most of the associated problems are handled by the
> cgroup-infra / APIs while all that need separate solution in
> alternatives. Tejun, feels like I'm advocating cgroup approach to you
> ;)
Hello,
On Wed, Aug 24, 2016 at 10:24:20PM +0200, Daniel Mack wrote:
> SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int,
> size)
> {
> union bpf_attr attr = {};
> @@ -888,6 +957,16 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *,
> uattr, unsigned int,
Hello, Daniel.
On Wed, Aug 24, 2016 at 10:24:19PM +0200, Daniel Mack wrote:
> +void cgroup_bpf_free(struct cgroup *cgrp)
> +{
> + unsigned int type;
> +
> + rcu_read_lock();
> +
> + for (type = 0; type < __MAX_BPF_ATTACH_TYPE; type++) {
> + if (!cgrp->bpf.prog[type])
> +
Hello, Anoop.
On Wed, Aug 10, 2016 at 05:53:13PM -0700, Anoop Naravaram wrote:
> This patchset introduces a cgroup controller for the networking subsystem as a
> whole. As of now, this controller will be used for:
>
> * Limiting the specific ports that a process in a cgroup is allowed to bind
>
Hello,
On Wed, Aug 17, 2016 at 10:50:40AM -0700, Alexei Starovoitov wrote:
> > +config CGROUP_BPF
> > + bool "Enable eBPF programs in cgroups"
> > + depends on BPF_SYSCALL
> > + help
> > + This options allows cgroups to accommodate eBPF programs that
> > + can be used for network tra
Hello, again.
Just one addition.
On Wed, Aug 17, 2016 at 04:35:24PM +0200, Daniel Mack wrote:
> created. To bring this more in line with how cgroups usually work, I
> guess we should add code to copy over the bpf programs from the ancestor
> once a new cgroup is instantiated.
Please don't copy t
Hello,
On Wed, Aug 17, 2016 at 04:35:24PM +0200, Daniel Mack wrote:
> > Wouldn't it make sense to update the descendants to point to the
> > programs directly so that there's no need to traverse the tree on each
> > packet? It's more state to maintain but I think it would make total
> > sense giv
On Wed, Aug 17, 2016 at 04:36:02PM +0200, Daniel Mack wrote:
> On 08/17/2016 04:23 PM, Tejun Heo wrote:
> > On Wed, Aug 17, 2016 at 04:00:47PM +0200, Daniel Mack wrote:
> >> @@ -78,6 +116,12 @@ int sk_filter_trim_cap(struct sock *sk, struct sk_buff
> >> *skb, u
On Wed, Aug 17, 2016 at 04:00:47PM +0200, Daniel Mack wrote:
> @@ -78,6 +116,12 @@ int sk_filter_trim_cap(struct sock *sk, struct sk_buff
> *skb, unsigned int cap)
> if (skb_pfmemalloc(skb) && !sock_flag(sk, SOCK_MEMALLOC))
> return -ENOMEM;
>
> +#ifdef CONFIG_CGROUP_BPF
> +
Hello, Daniel.
On Wed, Aug 17, 2016 at 04:00:46PM +0200, Daniel Mack wrote:
> The current implementation walks the tree from the passed cgroup up
> to the root. If there is any program of the given type installed in
> any of the ancestors, the installation is rejected. This is because
> programs s
Hello,
On Wed, Aug 17, 2016 at 04:00:45PM +0200, Daniel Mack wrote:
> @@ -5461,6 +5462,14 @@ static int cgroup_destroy_locked(struct cgroup *cgrp)
> for_each_css(css, ssid, cgrp)
> kill_css(css);
>
> +#ifdef CONFIG_CGROUP_BPF
> + if (cgrp->bpf_ingress)
> + bpf
us the room to implement the real "in" test
> > if ever necessary in the future.
>
> agree. Thanks for explaining 'in' vs 'under' terminology.
> since we can still rename skb_in_cgroup we should do it.
Sounds good to me.
> and since that was m
On Fri, Aug 12, 2016 at 09:21:39AM -0400, Tejun Heo wrote:
> On Thu, Aug 11, 2016 at 09:50:48PM -0700, Sargun Dhillon wrote:
> > I realize that in_cgroup is more consistent, but under_cgroup makes
> > far more sense to me. I think it's more intuitive.
>
> So, I think
Hello,
On Fri, Aug 12, 2016 at 09:40:39AM +0200, Daniel Borkmann wrote:
> > I actually wish we could rename skb_in_cgroup to skb_under_cgroup. If we
> > ever
> > introduced a check for absolute membership versus ancestral membership, what
> > would we call that?
>
> That option is, by the way, s
On Thu, Aug 11, 2016 at 09:50:48PM -0700, Sargun Dhillon wrote:
> I realize that in_cgroup is more consistent, but under_cgroup makes
> far more sense to me. I think it's more intuitive.
So, I think in_cgroup should mean that the object is in that
particular cgroup while under_cgroup in the subhie
ed
> this always returns true.
>
> Signed-off-by: Sargun Dhillon
> Cc: Alexei Starovoitov
> Cc: Daniel Borkmann
> Cc: Tejun Heo
Acked-by: Tejun Heo
Please feel free to route with other patches. If it'd be better to
route this through the cgroup tree, please let me know.
Thanks.
--
tejun
Hello, Sargun.
On Thu, Aug 11, 2016 at 11:51:42AM -0700, Sargun Dhillon wrote:
> This adds a bpf helper that's similar to the skb_in_cgroup helper to check
> whether the probe is currently executing in the context of a specific
> subset of the cgroupsv2 hierarchy. It does this based on membership
Hello, Leon.
On Sun, Jul 31, 2016 at 09:35:13AM +0300, Leon Romanovsky wrote:
> > The conversion uses WQ_MEM_RECLAIM, which is standard for all
> > workqueues which can stall packet processing if stalled. The
> > requirement comes from nfs or block devices over network.
>
> The title stays "remo
Hello,
On Fri, Jul 29, 2016 at 01:30:05AM +0300, Saeed Mahameed wrote:
> > Are the workitems being used on a memory reclaim path?
>
> do you mean they need to allocate memory ?
It's a bit convoluted. A workqueue needs WQ_MEM_RECLAIM flag to be
guaranteed forward progress under memory pressure,
Hello,
On Thu, Jul 28, 2016 at 12:37:35PM +0300, Leon Romanovsky wrote:
> Did you test this patch? Did you notice the memory reclaim path nature
> of this work?
The conversion uses WQ_MEM_RECLAIM, which is standard for all
workqueues which can stall packet processing if stalled. The
requirement
ped
> since destroy_workqueue() itself calls drain_workqueue() which flushes
> repeatedly till the workqueue becomes empty.
>
> Signed-off-by: Bhaktipriya Shridhar
Acked-by: Tejun Heo
Thanks.
--
tejun
Hello,
On Thu, Jun 23, 2016 at 11:42:31AM +0200, Daniel Borkmann wrote:
> I presume it's a valid use case to pin a cgroup map, put fds into it and
> remove the pinned file expecting to continue to match on it, right? So
> lifetime is really until last prog using a cgroup map somewhere gets removed
Starovoitov
> Cc: Daniel Borkmann
> Cc: Tejun Heo
Acked-by: Tejun Heo
Please feel free to route this patch with the rest of the series. If
it's preferable to apply this to the cgroup branch, please let me
know.
Thanks!
--
tejun
Hello, Martin.
On Tue, Jun 21, 2016 at 05:23:19PM -0700, Martin KaFai Lau wrote:
> @@ -6205,6 +6206,31 @@ struct cgroup *cgroup_get_from_path(const char *path)
> }
> EXPORT_SYMBOL_GPL(cgroup_get_from_path);
Proper function comment would be nice.
> +struct cgroup *cgroup_get_from_fd(int fd)
> +
Hello,
On Fri, Jun 10, 2016 at 08:40:34AM +0200, Daniel Wagner wrote:
> > [ Cc'ing John, Daniel, et al ]
> >
> > Btw, while I just looked at scm_detach_fds(), I think commits ...
> >
> > * 48a87cc26c13 ("net: netprio: fd passed in SCM_RIGHTS datagram not set
> > correctly")
> > * d84295067fc7
f-by: Bhaktipriya Shridhar
Acked-by: Tejun Heo
Thanks.
--
tejun
empty. Hence the calls to flush_workqueue() before
> destroy_workqueue() have been dropped.
>
> Signed-off-by: Bhaktipriya Shridhar
Acked-by: Tejun Heo
Thanks.
--
tejun
ueue() itself calls
> drain_workqueue() which flushes repeatedly till the workqueue
> becomes empty. Hence the call to flush_workqueue() has been dropped.
>
> Signed-off-by: Bhaktipriya Shridhar
As with the previous patch, the subject tag doesn't need to contain
the full hierarc
_db");
> + oct->check_db_wq[iq_no].wq = alloc_workqueue("check_iq_db",
> + WQ_MEM_RECLAIM,
> + 0);
Why the new line between WQ_MEM_RECLAIM and 0?
Except for the subj tag and the above nit,
Acked-by: Tejun Heo
Thanks.
--
tejun
On Thu, Jun 02, 2016 at 03:00:57PM +0530, Bhaktipriya Shridhar wrote:
> alloc_workqueue replaces deprecated create_workqueue().
>
> The workqueue adapter->txrx_wq has workitem
> &adapter->raise_intr_rxdata_task per adapter. Extended Socket Network
> Device is shared memory based, so someone's tran
flushes repeatedly till the workqueue
> becomes empty. Hence the call to flush_workqueue() has been dropped.
>
> Signed-off-by: Bhaktipriya Shridhar
Acked-by: Tejun Heo
Thanks.
--
tejun
>
> Signed-off-by: Bhaktipriya Shridhar
Acked-by: Tejun Heo
Thanks.
--
tejun
>
> Signed-off-by: Bhaktipriya Shridhar
Acked-by: Tejun Heo
Thanks.
--
tejun
Hello,
On Thu, May 26, 2016 at 11:19:06AM +0200, Vlastimil Babka wrote:
> > if (is_atomic) {
> > margin = 3;
> >
> > if (chunk->map_alloc <
> > - chunk->map_used + PCPU_ATOMIC_MAP_MARGIN_LOW &&
> > - pcpu_async_enabled)
> > -
tomic allocations
under pcpu_alloc_mutex to synchronize against pcpu_balance_work which
is responsible for async chunk management including destruction.
Signed-off-by: Tejun Heo
Reported-and-tested-by: Alexei Starovoitov
Reported-by: Vlastimil Babka
Reported-by: Sasha Levin
Cc
l in flight.
This patch fixes the bug by rolling async map extension operations
into pcpu_balance_work.
Signed-off-by: Tejun Heo
Reported-and-tested-by: Alexei Starovoitov
Reported-by: Vlastimil Babka
Reported-by: Sasha Levin
Cc: sta...@vger.kernel.org # v3.18+
Fixes: 9c824b6a172c ("percpu:
Hello,
Alexei, can you please verify this patch? Map extension got rolled
into balance work so that there's no sync issues between the two async
operations.
Thanks.
Index: work/mm/percpu.c
===
--- work.orig/mm/percpu.c
+++ work/mm/
Hello,
On Tue, May 24, 2016 at 10:40:54AM +0200, Vlastimil Babka wrote:
> [+CC Marco who reported the CVE, forgot that earlier]
>
> On 05/23/2016 11:35 PM, Tejun Heo wrote:
> > Hello,
> >
> > Can you please test whether this patch resolves the issue? While
&g
Hello,
Can you please test whether this patch resolves the issue? While
adding support for atomic allocations, I reduced alloc_mutex covered
region too much.
Thanks.
diff --git a/mm/percpu.c b/mm/percpu.c
index 0c59684..bd2df70 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -162,7 +162,7 @@ stat
Hello, Manish.
On Thu, Apr 14, 2016 at 07:25:15AM +, Manish Chopra wrote:
> Just want to confirm that __WQ_LEGACY flag is not necessary here as this is
> removed
> with this change ?
Yeah, that should be fine. That only affects locking dependency
tracking which can fire spuriously due to w
.
>
> Signed-off-by: Amitoj Kaur Chawla
> Acked-by: Tejun Heo
Ping?
--
tejun
Hello,
On Fri, Mar 18, 2016 at 05:24:05PM -0400, J. Bruce Fields wrote:
> > But does that actually work? It's pointless to add WQ_MEM_RECLAIM to
> > workqueues unless all other things are also guaranteed to make forward
> > progress regardless of memory pressure.
>
> It's supposed to work.
>
>
Hello, Jiri.
On Thu, Mar 17, 2016 at 01:00:13PM +0100, Jiri Slaby wrote:
> >> I have not done that yet, but today, I see:
> >> destroy_workqueue: name='req_hci0' pwq=88002f590300
> >> wq->dfl_pwq=88002f591e00 pwq->refcnt=2 pwq->nr_active=0 delayed_works:
> >>pwq 12: cpus=0-1 node=0 fla
Hello,
On Thu, Mar 17, 2016 at 01:43:22PM +0100, Johannes Berg wrote:
> On Thu, 2016-03-17 at 20:37 +0800, Eva Rachel Retuya wrote:
> > Use alloc_workqueue() to allocate the workqueue instead of
> > create_singlethread_workqueue() since the latter is deprecated and is
> > scheduled for removal.
>
Hello,
Years ago, workqueue got reimplemented to use common worker pools
across different workqueues and a new set of more expressive workqueue
creation APIs, alloc_*workqueue() were introduced. The old
create_*workqueue() became simple wrappers around alloc_*workqueue()
with the most conservativ
Hello, Jeff.
On Thu, Mar 17, 2016 at 09:32:16PM -0400, Jeff Layton wrote:
> > * Are network devices expected to be able to serve as a part of
> > storage stack which is depended upon for memory reclamation?
>
> I think they should be. Cached NFS pages can consume a lot of memory,
> and flushing
.
I think "not depended upon during memory reclaim" probably is a
better way to describe it.
> Signed-off-by: Eva Rachel Retuya
But other than that,
Acked-by: Tejun Heo
Thanks.
--
tejun
Hello,
Sorry about the delay.
On Thu, Mar 03, 2016 at 10:12:01AM +0100, Jiri Slaby wrote:
> On 03/02/2016, 04:45 PM, Tejun Heo wrote:
> > On Fri, Feb 19, 2016 at 01:10:00PM +0100, Jiri Slaby wrote:
> >>> 1. didn't help, the problem persists. So I haven't