On 24/05/18 10:04, Patrick Bellasi wrote:
[...]
> From 84bb8137ce79f74849d97e30871cf67d06d8d682 Mon Sep 17 00:00:00 2001
> From: Patrick Bellasi
> Date: Wed, 23 May 2018 16:33:06 +0100
> Subject: [PATCH 1/1] cgroup/cpuset: disable sched domain rebuild when not
> required
>
> The
On 17/05/18 16:55, Waiman Long wrote:
[...]
> @@ -849,7 +860,12 @@ static void rebuild_sched_domains_locked(void)
>  * passing doms with offlined cpu to partition_sched_domains().
>  * Anyways, hotplug work item will rebuild sched domains.
>  */
> - if
I don't have a platform at hand to test this on. But it looks OK to me.
Reviewed-by: Juri Lelli <juri.le...@redhat.com>
Best,
- Juri
>
> To close that window, rearrange the code so as to acquire the update
> lock around the deferred update branch in sugov_update_single()
> and drop the work_in_progress check from it.
>
> Signed-off-by: Rafael J. Wysocki
> CC: Peter Zijlstra <pet...@infradead.org>
> CC: Ingo Molnar <mi...@redhat.com>
> CC: Patrick Bellasi <patrick.bell...@arm.com>
> CC: Juri Lelli <juri.le...@redhat.com>
> Cc: Luca Abeni <luca.ab...@santannapisa.it>
> CC: Todd Kjos <tk...@google.com>
> CC: clau...@ev
I don't have a platform at hand to test this on. But
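[Editor's note: a minimal sketch of the rearrangement being described, assuming the 4.17-era schedutil fields (update_lock, next_freq, work_in_progress, irq_work); an illustration, not the literal upstream diff.]

	/*
	 * Sketch: the deferred branch now runs under update_lock, so
	 * publishing next_freq and kicking the kthread are a single
	 * atomic step with respect to sugov_work(), and
	 * sugov_update_single() no longer gates new requests on
	 * work_in_progress.
	 */
	static void sugov_deferred_update(struct sugov_policy *sg_policy,
					  u64 time, unsigned int next_freq)
	{
		sg_policy->next_freq = next_freq;
		sg_policy->last_freq_update_time = time;

		if (!sg_policy->work_in_progress) {
			sg_policy->work_in_progress = true;
			irq_work_queue(&sg_policy->irq_work);
		}
	}

	/* In sugov_update_single(), the deferred branch then becomes: */
		raw_spin_lock(&sg_policy->update_lock);
		sugov_deferred_update(sg_policy, time, next_f);
		raw_spin_unlock(&sg_policy->update_lock);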
sg_policy->next_freq = 0;
> freq = sg_policy->next_freq;
> sg_policy->next_freq = real-freq;
> unlock();
>
> Reported-by: Viresh Kumar
> CC: Rafael J. Wysocki
> CC: Peter Zijlstra
> CC: Ingo Molnar
> CC: Patrick Bellasi
> CC: Juri Le
Hi,
On 17/05/18 16:55, Waiman Long wrote:
> This patch enables us to report sched domain generation information.
>
> If DYNAMIC_DEBUG is enabled, issuing the following command
>
> echo "file cpuset.c +p" > /sys/kernel/debug/dynamic_debug/control
>
> and setting loglevel to 8 will allow the
Hi,
On 17/05/18 16:55, Waiman Long wrote:
[...]
> /**
> + * update_isolated_cpumask - update the isolated_cpus mask of parent cpuset
> + * @cpuset: The cpuset that requests CPU isolation
> + * @oldmask: The old isolated cpumask to be removed from the parent
> + * @newmask: The new isolated
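[Editor's note: a hedged sketch of the mask manipulation the kernel-doc above describes, using the stock cpumask helpers; illustrative only, the real function also has to validate the request against the parent.]

	/* Sketch: retire @oldmask from the parent's isolated set, then
	 * add @newmask. */
	static void update_isolated_cpumask_sketch(struct cpumask *parent_isolated,
						   const struct cpumask *oldmask,
						   const struct cpumask *newmask)
	{
		if (oldmask)
			cpumask_andnot(parent_isolated, parent_isolated, oldmask);
		if (newmask)
			cpumask_or(parent_isolated, parent_isolated, newmask);
	}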
isn't an RFC anymore, you shouldn't have added below
> > paragraph here. It could go to the comments section though.
> >
> >> I had brought up this issue at the OSPM conference and Claudio had a
> >> discussion RFC with an alternate approach [1]. I prefer the approach as
> >> done in the patch below since it doesn't need a
On 17/05/18 07:43, Joel Fernandes wrote:
> On Thu, May 17, 2018 at 04:28:23PM +0200, Juri Lelli wrote:
> [...]
> > > > > We would need more locking stuff in the work handler in that case and
> > > > > I think there maybe a chance of missing the request in th
On 17/05/18 12:59, Juri Lelli wrote:
> On 16/05/18 18:31, Juri Lelli wrote:
> > On 16/05/18 17:47, Peter Zijlstra wrote:
> > > On Wed, May 16, 2018 at 05:19:25PM +0200, Juri Lelli wrote:
> > >
> > > > Anyway, FWIW I started testing this on a E5-2609
On 17/05/18 06:07, Joel Fernandes wrote:
> On Thu, May 17, 2018 at 12:53:58PM +0200, Juri Lelli wrote:
> > On 17/05/18 15:50, Viresh Kumar wrote:
> > > On 17-05-18, 09:00, Juri Lelli wrote:
> > > > Hi Joel,
> > > >
> > >
On 16/05/18 18:31, Juri Lelli wrote:
> On 16/05/18 17:47, Peter Zijlstra wrote:
> > On Wed, May 16, 2018 at 05:19:25PM +0200, Juri Lelli wrote:
> >
> > > Anyway, FWIW I started testing this on a E5-2609 v3 and I'm not seeing
> > > hackbench regressions so far
On 17/05/18 15:50, Viresh Kumar wrote:
> On 17-05-18, 09:00, Juri Lelli wrote:
> > Hi Joel,
> >
> > On 16/05/18 15:45, Joel Fernandes (Google) wrote:
> >
> > [...]
> >
> > > @@ -382,13 +391,24 @@ sugov_update_shared(struct update_util
Hi Joel,
On 16/05/18 15:45, Joel Fernandes (Google) wrote:
[...]
> @@ -382,13 +391,24 @@ sugov_update_shared(struct update_util_data *hook, u64
> time, unsigned int flags)
> static void sugov_work(struct kthread_work *work)
> {
> struct sugov_policy *sg_policy = container_of(work,
On 16/05/18 17:47, Peter Zijlstra wrote:
> On Wed, May 16, 2018 at 05:19:25PM +0200, Juri Lelli wrote:
>
> > Anyway, FWIW I started testing this on a E5-2609 v3 and I'm not seeing
> > hackbench regressions so far (running with schedutil governor).
>
> https://en.wik
On 15/05/18 21:49, Srinivas Pandruvada wrote:
> intel_pstate has two operating modes: active and passive. In "active"
> mode, the in-built scaling governor is used and in "passive" mode,
> the driver can be used with any governor like "schedutil". In "active"
> mode the utilization values from
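[Editor's note: as a hedged usage aside (the attribute is documented in Documentation/admin-guide/pm/intel_pstate.rst), the driver's operating mode can be switched at runtime through its status attribute, e.g. to hand frequency selection over to schedutil. A small userspace sketch:]

	/* Put intel_pstate into "passive" mode so a generic governor
	 * such as schedutil drives frequency selection. Needs root. */
	#include <stdio.h>

	int main(void)
	{
		FILE *f = fopen("/sys/devices/system/cpu/intel_pstate/status", "w");

		if (!f)
			return 1;	/* driver not loaded, or no permission */
		fputs("passive", f);
		return fclose(f) ? 1 : 0;
	}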
Hi Srinivas,
On 15/05/18 21:49, Srinivas Pandruvada wrote:
[...]
>
> Peter Zijlstra (1):
> x86,sched: Add support for frequency invariance
Cool! I was going to ask Peter about this patch. You beat me to it. :)
I'll have a look at the set. BTW, just noticed that you Cc-ed me using
my old
On 09/05/18 10:25, Rafael J. Wysocki wrote:
> On Wed, May 9, 2018 at 10:23 AM, Juri Lelli <juri.le...@redhat.com> wrote:
> > On 09/05/18 10:05, Rafael J. Wysocki wrote:
> >> On Wed, May 9, 2018 at 9:01 AM, Joel Fernandes <j...@joelfernandes.org>
> >> wrote:
> >> > On Wed, May 09, 2018 at 12:24:49PM +053
ning it.
Signed-off-by: Juri Lelli <juri.le...@redhat.com>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: "Rafael J. Wysocki" <rafael.j.wyso...@intel.com>
Cc: Viresh Kumar <viresh.ku...@linaro.org>
Cc: Claudio Scordino <clau
Cc: Luca Abeni
---
 kernel/sched/cpufreq_schedutil.c | 13 -------------
 1 file changed, 13 deletions(-)
diff --git a/kernel/sched/cpufreq_schedutil.c b/ke
On 09/05/18 10:05, Rafael J. Wysocki wrote:
> On Wed, May 9, 2018 at 9:01 AM, Joel Fernandes <j...@joelfernandes.org> wrote:
> > On Wed, May 09, 2018 at 12:24:49PM +0530, Viresh Kumar wrote:
> >> On 09-05-18, 08:45, Juri Lelli wrote:
> >> > On 08/05/18 21:54, Joel Fernandes wrote:
On 08/05/18 21:54, Joel Fernandes wrote:
[...]
> Just for discussion sake, is there any need for work_in_progress? If we can
> queue multiple work say kthread_queue_work can handle it, then just queuing
> works whenever they are available should be Ok and the kthread loop can
> handle them.
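[Editor's note: a hedged sketch of the property Joel is leaning on here: kthread_queue_work() simply returns false when the work item is still pending, so requests coalesce on their own and the worker consumes the latest value when it runs. Field names mirror the sugov_policy layout of that era, illustratively.]

	/* Sketch: no work_in_progress flag. Re-queuing a pending work
	 * item is a no-op (kthread_queue_work() returns false), and
	 * the kthread loop reads the most recent next_freq when it
	 * finally gets to run. */
	static void request_deferred_update(struct sugov_policy *sg_policy,
					    unsigned int next_freq)
	{
		WRITE_ONCE(sg_policy->next_freq, next_freq);
		kthread_queue_work(&sg_policy->worker, &sg_policy->work);
	}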
P-state selection algorithm (powersave or performance) is selected by
echoing the desired choice to the scaling_governor sysfs attribute and
not to scaling_cur_freq (as currently stated).
Fix it.
Signed-off-by: Juri Lelli <juri.le...@redhat.com>
Cc: Jonathan Corbet <cor...@lwn.net>
Cc: "Rafael J. Wysocki"
Cc: Srinivas Pand
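[Editor's note: the patch itself only corrects documentation, but for reference a hedged userspace sketch of the echo it describes, against the standard cpufreq sysfs layout.]

	/* Show, then set, the P-state selection algorithm for cpu0;
	 * with intel_pstate in active mode this is what
	 * scaling_governor selects. */
	#include <stdio.h>

	#define GOV "/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor"

	int main(void)
	{
		char cur[32] = "";
		FILE *f = fopen(GOV, "r");

		if (f) {
			fgets(cur, sizeof(cur), f);
			fclose(f);
			printf("current: %s", cur);
		}

		f = fopen(GOV, "w");
		if (!f)
			return 1;
		fputs("powersave", f);
		return fclose(f) ? 1 : 0;
	}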
On 08/05/18 12:24, Quentin Perret wrote:
> On Tuesday 08 May 2018 at 16:44:51 (+0530), Viresh Kumar wrote:
> > On 08-05-18, 12:00, Quentin Perret wrote:
> > > Right, I see your point. Now, with the current implementation, why should
> > > we randomly force a CPU to manage the kthread of another ?
On 08/05/18 16:23, Viresh Kumar wrote:
> On 08-05-18, 12:36, Dietmar Eggemann wrote:
> > That's true but where is the benefit by doing so? (Multiple) per-cluster or
> > per-cpu frequency domains, why should the sugov kthread run on a foreign
> > cpu?
>
> I am not sure I know the answer, but I
On 19/04/18 09:47, Waiman Long wrote:
[...]
> + cpuset.cpus.isolated
> + A read-write multiple values file which exists on root cgroup
> + only.
> +
> + It lists the CPUs that have been withdrawn from the root cgroup
> + for load balancing. These CPUs can still be allocated to
On 23/04/18 15:07, Juri Lelli wrote:
> Hi Waiman,
>
> On 19/04/18 09:46, Waiman Long wrote:
> > v7:
> > - Add a root-only cpuset.cpus.isolated control file for CPU isolation.
> > - Enforce that load_balancing can only be turned off on cpusets with
> >
Hi Waiman,
On 19/04/18 09:46, Waiman Long wrote:
> v7:
> - Add a root-only cpuset.cpus.isolated control file for CPU isolation.
> - Enforce that load_balancing can only be turned off on cpusets with
>    CPUs from the isolated list.
> - Update sched domain generation to allow cpusets with CPUs
On 20/04/18 17:30, Kirill Tkhai wrote:
> On 20.04.2018 17:11, Juri Lelli wrote:
> > On 20/04/18 13:06, Kirill Tkhai wrote:
> >> From: Kirill Tkhai <ktk...@virtuozzo.com>
> >>
> >> tg_rt_schedulable() iterates over all child task groups,
> >> while tg_has_rt_tasks()
On 20/04/18 13:06, Kirill Tkhai wrote:
> From: Kirill Tkhai
>
> tg_rt_schedulable() iterates over all child task groups,
> while tg_has_rt_tasks() iterates over all linked tasks.
> On systems with a large number of tasks, this may
> take a lot of time.
>
> I observed hard LOCKUP on machine
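[Editor's note: for context, the shape of the walk being criticized, close to the pre-patch code of that era (quoted from memory, so treat as a sketch): tg_has_rt_tasks() scans every thread in the system, and tg_rt_schedulable() may invoke it for each child group, making admission control roughly O(groups x threads).]

	/* O(nr_threads) per call: one full tasklist scan per group. */
	static inline int tg_has_rt_tasks(struct task_group *tg)
	{
		struct task_struct *g, *p;

		for_each_process_thread(g, p) {
			if (rt_task(p) && task_group(p) == tg)
				return 1;
		}

		return 0;
	}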
On 20/04/18 12:43, Kirill Tkhai wrote:
> On 20.04.2018 12:25, Juri Lelli wrote:
[...]
> > Isn't this however checking against the current (dynamic) number of
> > runnable tasks/groups instead of the "static" group membership (which
> > shouldn't be affected by a ta
Hi Kirill,
On 19/04/18 20:29, Kirill Tkhai wrote:
> tg_rt_schedulable() iterates over all child task groups,
> while tg_has_rt_tasks() iterates over all linked tasks.
> On systems with a large number of tasks, this may
> take a lot of time.
>
> I observed hard LOCKUP on machine with 2+
On 20/04/18 09:31, Quentin Perret wrote:
> On Friday 20 Apr 2018 at 01:14:35 (-0700), Joel Fernandes wrote:
> > On Fri, Apr 20, 2018 at 1:13 AM, Joel Fernandes wrote:
> > > On Wed, Apr 18, 2018 at 4:17 AM, Quentin Perret
> > > wrote:
> > >> On Friday 13 Apr 2018 at 16:56:39 (-0700), Joel
On 27/03/18 14:31, Daniel Lezcano wrote:
> On 27/03/2018 14:28, Juri Lelli wrote:
> > Hi Daniel,
> >
> > On 27/03/18 12:26, Daniel Lezcano wrote:
> >> On 27/03/2018 04:03, Leo Yan wrote:
> >>> Hi Daniel,
> >>>
> >>> On Wed, Feb
Hi Daniel,
On 27/03/18 12:26, Daniel Lezcano wrote:
> On 27/03/2018 04:03, Leo Yan wrote:
> > Hi Daniel,
> >
> > On Wed, Feb 21, 2018 at 04:29:27PM +0100, Daniel Lezcano wrote:
> >> The cpu idle cooling driver performs synchronized idle injection across all
> >> cpus belonging to the same
On 26/03/18 16:28, Waiman Long wrote:
> On 03/26/2018 08:47 AM, Juri Lelli wrote:
> > On 23/03/18 14:44, Waiman Long wrote:
> >> On 03/23/2018 03:59 AM, Juri Lelli wrote:
> > [...]
> >
> >>> OK, thanks for confirming. Can you tell again howeve
On 23/03/18 14:44, Waiman Long wrote:
> On 03/23/2018 03:59 AM, Juri Lelli wrote:
[...]
> > OK, thanks for confirming. Can you tell again however why do you think
> > we need to remove sched_load_balance from root level? Won't we end up
> > having tasks put on isolated sets?
On 24/03/18 00:01, Rafał Miłecki wrote:
> On 23 March 2018 at 15:09, Juri Lelli <juri.le...@gmail.com> wrote:
> > On 23/03/18 14:43, Rafał Miłecki wrote:
> >> Hi,
> >>
> >> On 23 March 2018 at 10:47, Juri Lelli <juri.le...@gmail.com> wrote:
> >> > I've got a Dell XPS 13 9343/0TM99H (BIOS A1
Hi,
thanks a lot for your reply!
On 23/03/18 14:43, Rafał Miłecki wrote:
> Hi,
>
> On 23 March 2018 at 10:47, Juri Lelli <juri.le...@gmail.com> wrote:
> > I've got a Dell XPS 13 9343/0TM99H (BIOS A15 01/23/2018) mounting a
> > BCM4352 802.11ac (rev 03) wireless card and so far I've been u
Hi,
I've got a Dell XPS 13 9343/0TM99H (BIOS A15 01/23/2018) mounting a
BCM4352 802.11ac (rev 03) wireless card and so far I've been using it on
Fedora with broadcom-wl package (which I believe installs Broadcom's STA
driver?). It works well apart from occasional hiccups after suspend.
I'd like
On 22/03/18 17:50, Waiman Long wrote:
> On 03/22/2018 04:41 AM, Juri Lelli wrote:
> > On 21/03/18 12:21, Waiman Long wrote:
[...]
> >> + cpuset.sched_load_balance
> >> + A read-write single value file which exists on non-root cgroups.
> >> + The default
Hi Waiman,
On 21/03/18 12:21, Waiman Long wrote:
> The sched_load_balance flag is needed to enable CPU isolation similar
> to what can be done with the "isolcpus" kernel boot parameter.
>
> The sched_load_balance flag implies an implicit !cpu_exclusive as
> it doesn't make sense to have an
On 21/03/18 16:26, Morten Rasmussen wrote:
> On Wed, Mar 21, 2018 at 04:15:13PM +0100, Juri Lelli wrote:
> > On 21/03/18 13:55, Quentin Perret wrote:
> > > On Wednesday 21 Mar 2018 at 13:59:25 (+0100), Juri Lelli wrote:
> > > > On 21/03/18 12:26, Patrick Bellasi wr
On 21/03/18 13:55, Quentin Perret wrote:
> On Wednesday 21 Mar 2018 at 13:59:25 (+0100), Juri Lelli wrote:
> > On 21/03/18 12:26, Patrick Bellasi wrote:
> > > On 21-Mar 10:04, Juri Lelli wrote:
[...]
> > > > > + /*
> > > > > + * As the go
On 21/03/18 14:26, Quentin Perret wrote:
> On Wednesday 21 Mar 2018 at 12:39:21 (+0000), Patrick Bellasi wrote:
> > On 20-Mar 09:43, Dietmar Eggemann wrote:
[...]
> >
> > If that's the case then, in the previous function, you can certainly
> > avoid the initialization of *cs and maybe also add
On 21/03/18 12:26, Patrick Bellasi wrote:
> On 21-Mar 10:04, Juri Lelli wrote:
> > Hi,
> >
> > On 20/03/18 09:43, Dietmar Eggemann wrote:
> > > From: Quentin Perret <quentin.per...@arm.com>
> > >
> > > In preparation for the definition of an energy-aware wakeup path,
Hi,
On 20/03/18 09:43, Dietmar Eggemann wrote:
> From: Quentin Perret
>
> In preparation for the definition of an energy-aware wakeup path, a
> helper function is provided to estimate the consequence on system energy
> when a specific task wakes-up on a specific CPU. compute_energy()
>
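[Editor's note: a toy model of the kind of estimate such a compute_energy() performs; names and structure here are illustrative, not Quentin's actual helper. With a simple energy model, each CPU in a frequency domain contributes the domain's power cost scaled by its utilization ratio.]

	/* Toy energy model: E = sum_i power(cs) * util[i] / cap(cs),
	 * where cs is the capacity state the domain must run at to
	 * serve the demand. */
	struct cap_state {
		unsigned long cap;	/* compute capacity at this OPP */
		unsigned long power;	/* power cost at this OPP */
	};

	static unsigned long domain_energy(const struct cap_state *cs,
					   const unsigned long *util,
					   int nr_cpus)
	{
		unsigned long energy = 0;
		int i;

		for (i = 0; i < nr_cpus; i++)
			energy += cs->power * util[i] / cs->cap;

		return energy;
	}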
On 16/02/18 08:25, Christopher Díaz Riveros wrote:
> El vie, 16-02-2018 a las 10:44 +0100, Juri Lelli escribió:
> > On 15/02/18 17:52, Peter Zijlstra wrote:
> > > On Thu, Feb 15, 2018 at 10:43:18AM -0500, Christopher Diaz Riveros
> > > wrote:
> >
> >
On 15/02/18 17:52, Peter Zijlstra wrote:
> On Thu, Feb 15, 2018 at 10:43:18AM -0500, Christopher Diaz Riveros wrote:
[...]
> > @@ -437,20 +437,28 @@ struct sched_dl_entity {
> > * during sched_setattr(), they will remain the same until
> > * the next sched_setattr().
> > */
> > -
Hi,
On 15/02/18 16:20, Morten Rasmussen wrote:
> From: Valentin Schneider
>
> The name "overload" is not very explicit, especially since it doesn't
> use any concept of "load" coming from load-tracking signals. For now it
> simply tracks if any of the CPUs in root_domain has more than one
>
On 15/02/18 11:33, Juri Lelli wrote:
> On 14/02/18 17:31, Juri Lelli wrote:
>
> [...]
>
> > Still grabbing it is a no-go, as do_sched_setscheduler calls
> > sched_setscheduler from inside an RCU read-side critical section.
>
> I was then actually thinking that
On 14/02/18 17:31, Juri Lelli wrote:
[...]
> Still grabbing it is a no-go, as do_sched_setscheduler calls
> sched_setscheduler from inside an RCU read-side critical section.
I was then actually thinking that trylocking might do.. not sure however
if failing with -EBUSY in the contende
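[Editor's note: a hedged sketch of the trylock idea being floated, using the plain mutex API for illustration: since the path cannot sleep inside an RCU read-side critical section, fail fast and let the caller retry.]

	/* Sketch: don't sleep on the mutex from a context that cannot
	 * block; report contention with -EBUSY instead. */
	static int setscheduler_update_locked(struct mutex *lock)
	{
		if (!mutex_trylock(lock))
			return -EBUSY;	/* contended: caller may retry */

		/* ... perform the update under the lock ... */

		mutex_unlock(lock);
		return 0;
	}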