On 4/19/19 1:40 AM, Ingo Molnar wrote:
* Subhra Mazumdar wrote:
I see similar improvement with this patch as removing the condition I
earlier mentioned. So that's not needed. I also included the patch for the
priority fix. For 2 DB instances, HT disabling stands at -22% for 32 users
(from earlier emails).
* Subhra Mazumdar wrote:
> I see similar improvement with this patch as removing the condition I
> earlier mentioned. So that's not needed. I also included the patch for the
> priority fix. For 2 DB instances, HT disabling stands at -22% for 32 users
> (from earlier emails).
>
>
> 1 DB
On Tue, Apr 02, 2019 at 10:28:12AM +0200, Peter Zijlstra wrote:
> On Tue, Apr 02, 2019 at 02:46:13PM +0800, Aaron Lu wrote:
...
> > Perhaps we can test if max is on the same cpu as class_pick and then
> > use cpu_prio_less() or core_prio_less() accordingly here, or just
> > replace
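Something along these lines seems to be what is being suggested; a minimal
sketch only, where cpu_prio_less()/core_prio_less() are the two comparison
helpers named in the discussion and the wrapper itself is hypothetical:

	/*
	 * Prefer class_pick over max only when it wins the appropriate
	 * comparison: picks on the same CPU can be compared directly,
	 * picks on different CPUs need the core-wide comparison.
	 */
	static bool prefer_class_pick(struct task_struct *class_pick,
				      struct task_struct *max)
	{
		if (!max)
			return true;

		if (task_cpu(class_pick) == task_cpu(max))
			return cpu_prio_less(max, class_pick);

		return core_prio_less(max, class_pick);
	}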
On 10-Apr-2019 10:06:30 AM, Peter Zijlstra wrote:
> while you're all having fun playing with this, I've not yet had answers
> to the important questions of how L1TF complete we want to be and if all
> this crud actually matters one way or the other.
>
> Also, I still don't see this stuff working
On Thu, Apr 11, 2019 at 11:05:41AM +0800, Aaron Lu wrote:
> On Wed, Apr 10, 2019 at 04:44:18PM +0200, Peter Zijlstra wrote:
> > When core_cookie==0 we shouldn't schedule the other siblings at all.
>
> Not even with another untagged task?
>
> I was thinking to leave host side tasks untagged, like
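For reference, one way the untagged case could be treated; a hypothetical
sketch (p->core_cookie is the per-task cookie from the patch set, the helper
name here is made up):

	/*
	 * Two untagged (cookie == 0) host-side tasks are allowed to share
	 * a core; everything else requires identical cookies.
	 */
	static bool cookies_compatible(struct task_struct *a, struct task_struct *b)
	{
		if (!a->core_cookie && !b->core_cookie)
			return true;

		return a->core_cookie == b->core_cookie;
	}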
On Wed, Apr 10, 2019 at 04:44:18PM +0200, Peter Zijlstra wrote:
> On Wed, Apr 10, 2019 at 12:36:33PM +0800, Aaron Lu wrote:
> > On Tue, Apr 09, 2019 at 11:09:45AM -0700, Tim Chen wrote:
> > > Now that we have accumulated quite a number of different fixes to your
> > > original posted patches. Would you like to post a v2 of the core
> > > scheduler with the fixes?
On Wed, Apr 10, 2019 at 10:18:10PM +0800, Aubrey Li wrote:
> On Wed, Apr 10, 2019 at 12:36 PM Aaron Lu wrote:
> >
> > On Tue, Apr 09, 2019 at 11:09:45AM -0700, Tim Chen wrote:
> > > Now that we have accumulated quite a number of different fixes to your
> > > original posted patches. Would you like to post a v2 of the core
> > > scheduler with the fixes?
On 4/9/19 11:38 AM, Julien Desfossez wrote:
We found the source of the major performance regression we discussed
previously. It turns out there was a pattern where a task (a kworker in this
case) could be woken up, but the core could still end up idle before that
task had a chance to run.
From: Vineeth Pillai
> Well, I was promised someone else was going to carry all this, also
We are interested in this feature and have been actively testing, benchmarking
and working on fixes. If there is no v2 effort currently in progress, we are
willing to help consolidate all the changes
On Tue, Apr 09, 2019 at 02:38:55PM -0400, Julien Desfossez wrote:
> We found the source of the major performance regression we discussed
> previously. It turns out there was a pattern where a task (a kworker in this
> case) could be woken up, but the core could still end up idle before that
> task
On Wed, Apr 10, 2019 at 12:36:33PM +0800, Aaron Lu wrote:
> On Tue, Apr 09, 2019 at 11:09:45AM -0700, Tim Chen wrote:
> > Now that we have accumulated quite a number of different fixes to your
> > original posted patches. Would you like to post a v2 of the core scheduler
> > with the fixes?
On Wed, Apr 10, 2019 at 12:36 PM Aaron Lu wrote:
>
> On Tue, Apr 09, 2019 at 11:09:45AM -0700, Tim Chen wrote:
> > Now that we have accumulated quite a number of different fixes to your
> > original posted patches. Would you like to post a v2 of the core scheduler
> > with the fixes?
>
>
On Tue, Apr 09, 2019 at 11:09:45AM -0700, Tim Chen wrote:
> Now that we have accumulated quite a number of different fixes to your original
> posted patches. Would you like to post a v2 of the core scheduler with the
> fixes?
Well, I was promised someone else was going to carry all this, also,
On Tue, Apr 09, 2019 at 11:09:45AM -0700, Tim Chen wrote:
> Now that we have accumulated quite a number of different fixes to your original
> posted patches. Would you like to post a v2 of the core scheduler with the
> fixes?
One more question I'm not sure: should a task with cookie=0, i.e.
We found the source of the major performance regression we discussed
previously. It turns out there was a pattern where a task (a kworker in this
case) could be woken up, but the core could still end up idle before that
task had a chance to run.
Example sequence, cpu0 and cpu1 are siblings on the
On 4/5/19 7:55 AM, Aaron Lu wrote:
> On Tue, Apr 02, 2019 at 10:28:12AM +0200, Peter Zijlstra wrote:
>> Another approach would be something like the below:
>>
>>
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -87,7 +87,7 @@ static inline int __task_prio(struct tas
>> */
>>
>>
On Tue, Apr 02, 2019 at 10:28:12AM +0200, Peter Zijlstra wrote:
> Another approach would be something like the below:
>
>
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -87,7 +87,7 @@ static inline int __task_prio(struct tas
> */
>
> /* real prio, less is less */
> -static
On Tue, Apr 02, 2019 at 10:28:12AM +0200, Peter Zijlstra wrote:
> Another approach would be something like the below:
>
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -87,7 +87,7 @@ static inline int __task_prio(struct tas
> */
>
> /* real prio, less is less */
> -static inline
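For context, the helper touched by the hunk quoted above collapses every
scheduling class into a single comparable integer. This is a sketch of what
__task_prio() does in the posted series, not the authoritative patch text;
details may differ:

	static inline int __task_prio(struct task_struct *p)
	{
		if (p->sched_class == &stop_sched_class)
			return -2;			/* trumps everything */

		if (rt_prio(p->prio))
			return p->prio;			/* RT and deadline */

		if (p->sched_class == &idle_sched_class)
			return MAX_RT_PRIO + NICE_WIDTH;	/* lowest */

		return MAX_RT_PRIO + MAX_NICE;		/* squash all fair tasks */
	}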
On Tue, Apr 02, 2019 at 02:46:13PM +0800, Aaron Lu wrote:
> On Mon, Feb 18, 2019 at 05:56:33PM +0100, Peter Zijlstra wrote:
> > +static struct task_struct *
> > +pick_task(struct rq *rq, const struct sched_class *class, struct task_struct *max)
> > +{
> > + struct task_struct *class_pick,
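For readers without the original posting at hand, the function being quoted
continues roughly as follows. This is a sketch based on the posted series
(sched_core_find(), rq->core->core_cookie and the two comparison helpers come
from that series); the exact body may differ:

	static struct task_struct *
	pick_task(struct rq *rq, const struct sched_class *class, struct task_struct *max)
	{
		struct task_struct *class_pick, *cookie_pick;
		unsigned long cookie = rq->core->core_cookie;

		class_pick = class->pick_task(rq);
		if (!cookie)
			return class_pick;

		cookie_pick = sched_core_find(rq, cookie);
		if (!class_pick)
			return cookie_pick;

		/*
		 * class_pick is only chosen over cookie_pick when it beats
		 * both the cookie pick on this CPU and the core-wide max.
		 */
		if (cpu_prio_less(cookie_pick, class_pick) &&
		    core_prio_less(max, class_pick))
			return class_pick;

		return cookie_pick;
	}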
Instead of only selecting a local task, select a task for all SMT
siblings for every reschedule on the core (irrespective of which logical
CPU does the reschedule).
NOTE: there is still potential for siblings rivalry.
NOTE: this is far too complicated; but thus far I've failed to
simplify it
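As a rough illustration of the idea only (the real logic lives in
pick_next_task() and is considerably more involved; highest_runnable_task()
and cookie_match_or_idle() are made-up names, while rq->core_pick is the
per-runqueue field from the patch set):

	/*
	 * On a reschedule, find the core-wide highest priority task, then
	 * give every SMT sibling either a cookie-compatible task or idle.
	 */
	static void pick_for_whole_core(struct rq *rq)
	{
		const struct cpumask *smt = cpu_smt_mask(cpu_of(rq));
		struct task_struct *max = NULL;
		int cpu;

		/* Pass 1: core-wide highest priority runnable task. */
		for_each_cpu(cpu, smt) {
			struct task_struct *p = highest_runnable_task(cpu_rq(cpu));

			if (p && (!max || core_prio_less(max, p)))
				max = p;
		}

		/* Pass 2: each sibling runs something compatible with max, or idles. */
		for_each_cpu(cpu, smt)
			cpu_rq(cpu)->core_pick = cookie_match_or_idle(cpu_rq(cpu), max);
	}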