Re: [RFC PATCH v2 0/8] Generic IPI sending tracepoint

2022-11-14 Thread Steven Rostedt
On Wed,  2 Nov 2022 18:29:41 +
Valentin Schneider  wrote:

> This is incomplete; just looking at arm64, there are more IPI types that
> aren't covered:
> 
>   IPI_CPU_STOP,
>   IPI_CPU_CRASH_STOP,
>   IPI_TIMER,
>   IPI_WAKEUP,
> 
> ... But it feels like a good starting point.

For the tracing portions:

Reviewed-by: Steven Rostedt (Google) 

-- Steve



Re: [RFC PATCH 0/5] Generic IPI sending tracepoint

2022-10-11 Thread Steven Rostedt
On Tue, 11 Oct 2022 17:40:26 +0100
Valentin Schneider  wrote:

> > You could keep the tracepoint as a mask, and then make it pretty, like
> > cpus=3-5,8 in user-space. For example with a trace-cmd/perf loadable
> > plugin or a libtracefs helper.
> >  
> 
> That's a nice idea, the one downside I see is that means registering an
> event handler for all events with cpumasks rather than directly targeting
> cpumask fields, but that doesn't look too horrible. I'll dig a bit in that
> direction.

We could just make all dynamic arrays of unsigned long use that
format? I don't know of any other event that has dynamic arrays of
unsigned longs, and a search doesn't turn up any.
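
For illustration, a minimal user-space sketch of such a pretty-printer
(not from the thread; it assumes the mask arrives as a little-endian
byte array of @size bytes, as dumped in the record tail):

#include <stdio.h>

/* Render the set bits of a raw cpumask as "3-5,8"-style ranges. */
static void print_cpumask_pretty(const unsigned char *mask, int size)
{
	int nbits = size * 8, start = -1;
	const char *sep = "";

	for (int bit = 0; bit <= nbits; bit++) {
		int set = bit < nbits && ((mask[bit / 8] >> (bit % 8)) & 1);

		if (set && start < 0)
			start = bit;
		if (!set && start >= 0) {
			if (bit - 1 == start)
				printf("%s%d", sep, start);
			else
				printf("%s%d-%d", sep, start, bit - 1);
			sep = ",";
			start = -1;
		}
	}
	printf("\n");
}

int main(void)
{
	unsigned char mask[] = { 0x38, 0x01 };		/* CPUs 3-5 and 8 */

	print_cpumask_pretty(mask, sizeof(mask));	/* prints "3-5,8" */
	return 0;
}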

-- Steve



Re: [RFC PATCH 0/5] Generic IPI sending tracepoint

2022-10-11 Thread Steven Rostedt
On Tue, 11 Oct 2022 17:17:04 +0100
Valentin Schneider  wrote:

> tep_get_field_val() just yields an unsigned long long of value 0x200018,
> which AFAICT is just the [length, offset] thing associated with dynamic
> arrays. Not really usable, and I don't see anything exported in the lib to
> extract and use those values.

Correct.
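
For reference, 0x200018 is the kernel's __data_loc packing for dynamic
arrays: length in the upper 16 bits, offset within the record in the
lower 16. Decoding it is trivial:

unsigned long long val = 0x200018;		/* as returned above */
unsigned int len    = (val >> 16) & 0xffff;	/* 0x20 = 32 bytes */
unsigned int offset = val & 0xffff;		/* 0x18 = byte 24 */

(The 32 matches the cpumask[size=32] output below.)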

> 
> tep_get_field_raw() is better, it handles the dynamic array for us and
> yields a pointer to the cpumask array at the tail of the record. With that
> it's easy to get an output such as: cpumask[size=32]=[4,0,0,0,]. Still,
> this isn't a native type for many programming languages.

Yeah, this is the interface that I would have used. And it would likely
require some kind of wrapper to make it into what you prefer.
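
A hedged sketch of what such a wrapper could do once the raw tail
pointer and its byte length are in hand (the helper name is invented
here, and it assumes the mask is stored little-endian and fits in 64
bits):

#include <stdint.h>

static uint64_t cpumask_to_u64(const unsigned char *mask, int len)
{
	uint64_t out = 0;

	for (int i = 0; i < len && i < 8; i++)
		out |= (uint64_t)mask[i] << (8 * i);

	return out;	/* e.g. [4,0,0,0,...] -> 0x4, i.e. CPU 2 set */
}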

Note, I've been complaining for some time now about how much I hate the
libtraceevent interface, and want to rewrite it. (I just spoke to
someone who wants to implement it in Rust.) But the question comes
down to how to make it backward compatible. Perhaps we don't, and just
bump the major version number (libtraceevent 2.0).

What would you recommend as an API to process cpumasks better?

-- Steve



Re: [RFC PATCH 0/5] Generic IPI sending tracepoint

2022-10-08 Thread Steven Rostedt
On Fri, 7 Oct 2022 17:01:33 -0300
Marcelo Tosatti  wrote:

> > As for the targeted CPUs, the existing tracepoint does export them, albeit
> > in cpumask form, which is quite inconvenient from a tooling perspective.
> > For instance, as far as I'm aware, it's not possible to do event filtering
> > on a cpumask via trace-cmd.
> 
> https://man7.org/linux/man-pages/man1/trace-cmd-set.1.html
> 
>-f filter
>Specify a filter for the previous event. This must come after
>a -e. This will filter what events get recorded based on the
>content of the event. Filtering is passed to the kernel
>directly so what filtering is allowed may depend on what
>version of the kernel you have. Basically, it will let you
>use C notation to check if an event should be processed or
>not.
> 
>==, >=, <=, >, <, &, |, && and ||
> 
>The above are usually safe to use to compare fields.

We could always add an "isset(x)" filter ;-)
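
A hypothetical sketch of what such a kernel-side predicate could test
(the function name is invented here):

/* Would "isset(cpu)" match? Test bit @cpu in the recorded cpumask. */
static bool filter_isset(const unsigned long *mask, unsigned int nbits,
			 unsigned int cpu)
{
	return cpu < nbits && test_bit(cpu, mask);
}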

-- Steve



Re: [RFC PATCH 4/5] irq_work: Trace calls to arch_irq_work_raise()

2022-10-08 Thread Steven Rostedt
On Fri,  7 Oct 2022 16:45:32 +0100
Valentin Schneider  wrote:
>  }
>  
> +static inline void irq_work_raise(void)
> +{
> + if (arch_irq_work_has_interrupt())
> + trace_ipi_send_cpu(_RET_IP_, smp_processor_id());

To save on the branch, let's make the above:

if (trace_ipi_send_cpu_enabled() && arch_irq_work_has_interrupt())

As "trace_*_enabled()" is a static branch, it will make this a nop
when the tracepoint is not enabled.
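
Putting that together, the helper would then read:

static inline void irq_work_raise(void)
{
	/* trace_ipi_send_cpu_enabled() is a static branch: a nop when off */
	if (trace_ipi_send_cpu_enabled() && arch_irq_work_has_interrupt())
		trace_ipi_send_cpu(_RET_IP_, smp_processor_id());

	arch_irq_work_raise();
}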

-- Steve


> +
> + arch_irq_work_raise();
> +}
> +
>  /* Enqueue on current CPU, work must already be claimed and preempt disabled */



Re: [PATCH v2 33/44] ftrace: WARN on rcuidle

2022-09-26 Thread Steven Rostedt


Nit, the subject should have "tracing:" and not "ftrace:", as the former
encompasses the tracing infrastructure and the latter is for the function
hook part of it.

On Mon, 19 Sep 2022 12:00:12 +0200
Peter Zijlstra  wrote:

> CONFIG_GENERIC_ENTRY disallows any and all tracing when RCU isn't
> enabled.
> 
> XXX if s390 (the only other GENERIC_ENTRY user as of this writing)
> isn't comfortable with this, we could switch to
> HAVE_NOINSTR_VALIDATION which is x86_64 only atm.
> 
> Signed-off-by: Peter Zijlstra (Intel) 
> ---
>  include/linux/tracepoint.h |   13 -
>  kernel/trace/trace.c   |3 +++
>  2 files changed, 15 insertions(+), 1 deletion(-)
> 
> --- a/include/linux/tracepoint.h
> +++ b/include/linux/tracepoint.h
> @@ -178,6 +178,16 @@ static inline struct tracepoint *tracepo
>  #endif /* CONFIG_HAVE_STATIC_CALL */
>  
>  /*
> + * CONFIG_GENERIC_ENTRY archs are expected to have sanitized entry and idle
> + * code that disallow any/all tracing/instrumentation when RCU isn't watching.
> + */
> +#ifdef CONFIG_GENERIC_ENTRY
> +#define RCUIDLE_COND(rcuidle)	(rcuidle)
> +#else

Should probably move the below comment to here:

 /* srcu can't be used from NMI */

> +#define RCUIDLE_COND(rcuidle)	(rcuidle && in_nmi())
> +#endif
> +
> +/*
> + * it_func[0] is never NULL because there is at least one element in the array
>   * when the array itself is non NULL.
>   */
> @@ -189,7 +199,8 @@ static inline struct tracepoint *tracepo
>   return; \
>   \
>   /* srcu can't be used from NMI */   \

And remove the above.
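
With both comment moves applied, the block would read:

#ifdef CONFIG_GENERIC_ENTRY
#define RCUIDLE_COND(rcuidle)	(rcuidle)
#else
/* srcu can't be used from NMI */
#define RCUIDLE_COND(rcuidle)	(rcuidle && in_nmi())
#endif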

-- Steve

> - WARN_ON_ONCE(rcuidle && in_nmi());  \
> + if (WARN_ON_ONCE(RCUIDLE_COND(rcuidle)))\
> + return; \
>   \
>   /* keep srcu and sched-rcu usage consistent */  \
>   preempt_disable_notrace();  \
> --- a/kernel/trace/trace.c
> +++ b/kernel/trace/trace.c
> @@ -3104,6 +3104,9 @@ void __trace_stack(struct trace_array *t
>   return;
>   }
>  
> + if (WARN_ON_ONCE(IS_ENABLED(CONFIG_GENERIC_ENTRY)))
> + return;
> +
>   /*
>* When an NMI triggers, RCU is enabled via ct_nmi_enter(),
>* but if the above rcu_is_watching() failed, then the NMI
> 




Re: [PATCH 24/36] printk: Remove trace_.*_rcuidle() usage

2022-06-14 Thread Steven Rostedt
On Thu, 9 Jun 2022 15:02:20 +0200
Petr Mladek  wrote:

> > I'm somewhat curious whether we can actually remove that trace event.  
> 
> Good question.
> 
> Well, I think that it might be useful. It allows us to see trace and
> printk messages together.

Yes, people still use it. I was just asked about it at Kernel Recipes. That
is, someone wanted printk mixed in with the tracing, and I told them about
this event (which they didn't know about but were happy to hear existed).

-- Steve



Re: [patch RFC 00/15] mm/highmem: Provide a preemptible variant of kmap_atomic & friends

2020-09-24 Thread Steven Rostedt
On Thu, 24 Sep 2020 19:55:10 +0200
Thomas Gleixner  wrote:

> On Thu, Sep 24 2020 at 08:32, Steven Rostedt wrote:
> > On Thu, 24 Sep 2020 08:57:52 +0200
> > Thomas Gleixner  wrote:
> >  
> >> > Now as for migration disabled nesting, at least now we would have
> >> > groupings of this, and perhaps the theorists can handle that. I mean,
> >> > how is this much different than having a bunch of tasks blocked on a
> >> > mutex whose owner is pinned on a CPU?
> >> >
> >> > migrate_disable() is a BKL of pinning affinity.
> >> 
> >> No. That's just wrong. preempt disable is a concurrency control,  
> >
> > I think you totally misunderstood what I was saying. The above wasn't about
> > comparing preempt_disable to migrate_disable. It was comparing
> > migrate_disable to a chain of tasks blocked on mutexes where the top owner
> > has preempt_disable set. You still have a bunch of tasks that can't move to
> > other CPUs.  
> 
> What? The top owner does not prevent any task from moving. The tasks
> cannot move because they are blocked on the mutex, which means they are
> not runnable and non runnable tasks are not migrated at all.

And neither are migrate-disabled tasks that have been preempted by a
high-priority task.

> 
> I really don't understand what you are trying to say.

Don't worry about it. I was just making a high-level comparison of how
migrate-disabled tasks blocked behind a higher-priority task are similar
to tasks blocked on a mutex held by a pinned task that is preempted by a
high-priority task. But we can forget this analogy as it's not
appropriate for the current conversation.

> 
> >> > If we only have local_lock() available (even on !RT), then it makes
> >> > the blocking in groups. At least this way you could grep for all the
> >> > different local_locks in the system and plug that into the algorithm
> >> > for WCS, just like one would with a bunch of mutexes.
> >> 
> >> You cannot do that on RT at all where migrate disable is substituting
> >> preempt disable in spin and rw locks. The result would be the same as
> >> with a !RT kernel just with horribly bad performance.  
> >
> > Note, the spin and rwlocks already have a lock associated with them. Why
> > would it be any different on RT? I wasn't suggesting adding another lock
> > inside a spinlock. Why would I recommend THAT? I wasn't recommending
> > blindly replacing migrate_disable() with local_lock(). I just meant expose
> > local_lock() but not migrate_disable().  
> 
> We already exposed local_lock() to non RT and it's for places which do
> preempt_disable() or local_irq_disable() without having a lock
> associated. But both primitives are scope less and therefore behave like
> CPU local BKLs. What local_lock() provides in these cases is:
> 
> >   - Making the protection scope clear by associating a named local
> > lock which is covered by lockdep.
> 
>   - It still maps to preempt_disable() or local_irq_disable() in !RT
> kernels
> 
> >   - The scope and the named lock allow RT kernels to substitute with
> > real (recursion aware) locking primitives which keep preemption and
> > interrupts enabled, but provide the fine grained protection for the
> > scoped critical section.

I'm very much aware of the above.

>   
> So how would you substitute migrate_disable() with a local_lock()? You
> can't. Again migrate_disable() is NOT a concurrency control and
> therefore it cannot be substituted by any concurrency control primitive.

When I was first writing my email, I was describing a way to replace
migrate_disable() with a construct similar to local locks without
actually naming local locks, but then rewrote it to say local locks to
simplify things. I shouldn't have done that, because it suggested that I
wanted to use local_lock() unmodified. I was actually thinking of a new
construct that was similar, but not exactly the same as, a local lock.

But this will just make things more complex and we can forget about it.

I'll wait to see what Peter produces.

-- Steve



Re: [patch RFC 00/15] mm/highmem: Provide a preemptible variant of kmap_atomic & friends

2020-09-24 Thread Steven Rostedt
On Thu, 24 Sep 2020 14:42:41 +0200
Peter Zijlstra  wrote:

> On Thu, Sep 24, 2020 at 08:32:41AM -0400, Steven Rostedt wrote:
> > Anyway, instead of blocking: what about having a counter of the number of
> > migrate-disabled tasks per CPU, and when taking a migrate_disable() while
> > there's already another task with migrate_disable() set and the current
> > task has an affinity greater than 1, it tries to migrate to another CPU?
> 
> That doesn't solve the problem. On wakeup we should already prefer an
> idle CPU over one running a (RT) task, but you can always wake more
> tasks than there's CPUs around and you'll _have_ to stack at some point.

Yes, understood.

> 
> The trick is how to unstack them correctly. We need to detect when a
> migrate_disable() task _should_ start running again, and migrate away
> whoever is in the way at that point.
> 
> It turns out, that getting selected for pull-balance is exactly that
> condition, and clearly a migrate_disable() task cannot be pulled, but we
> can use that signal to try and pull away the running task that's in the
> way.

Unless of course that running task is in a migrate disable section
itself ;-)

But I guess we will always have that SHC, and there will always be a
scenario that you can't balance properly. But hopefully in practice we
won't see that.

As for how to handle kmap_local(): will migrate_disable() be used only
for 32-bit or, for consistency, will it also apply to 64-bit?

-- Steve



Re: [patch RFC 00/15] mm/highmem: Provide a preemptible variant of kmap_atomic & friends

2020-09-24 Thread Steven Rostedt
On Thu, 24 Sep 2020 08:57:52 +0200
Thomas Gleixner  wrote:

> > Now as for migration disabled nesting, at least now we would have
> > groupings of this, and perhaps the theorists can handle that. I mean,
> > how is this much different than having a bunch of tasks blocked on a
> > mutex whose owner is pinned on a CPU?
> >
> > migrate_disable() is a BKL of pinning affinity.  
> 
> No. That's just wrong. preempt disable is a concurrency control,

I think you totally misunderstood what I was saying. The above wasn't about
comparing preempt_disable to migrate_disable. It was comparing
migrate_disable to a chain of tasks blocked on mutexes where the top owner
has preempt_disable set. You still have a bunch of tasks that can't move to
other CPUs.


> > If we only have local_lock() available (even on !RT), then it makes
> > the blocking in groups. At least this way you could grep for all the
> > different local_locks in the system and plug that into the algorithm
> > for WCS, just like one would with a bunch of mutexes.  
> 
> You cannot do that on RT at all where migrate disable is substituting
> preempt disable in spin and rw locks. The result would be the same as
> with a !RT kernel just with horribly bad performance.

Note, the spin and rwlocks already have a lock associated with them. Why
would it be any different on RT? I wasn't suggesting adding another lock
inside a spinlock. Why would I recommend THAT? I wasn't recommending
blindly replacing migrate_disable() with local_lock(). I just meant expose
local_lock() but not migrate_disable().

> 
> That means the stacking problem has to be solved anyway.
> 
> So why on earth do you want to create yet another special duct tape case
> for kmap_local() which proliferates inconsistency instead of aiming for
> consistency across all preemption models?

The idea was to help with the scheduling issue.

Anyway, instead of blocking: what about having a counter of the number of
migrate-disabled tasks per CPU, and when taking a migrate_disable() while
there's already another task with migrate_disable() set and the current
task has an affinity greater than 1, it tries to migrate to another CPU?

This way migrate_disable() is less likely to have a bunch of tasks blocked
on one CPU serialized by each task exiting the migrate_disable() section.

Yes, there's more overhead, but it only happens if multiple tasks are in a
migrate disable section on the same CPU.
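
A hypothetical sketch of that idea (it was never implemented; the names
here are invented):

static DEFINE_PER_CPU(int, nr_migrate_disabled);

static void migrate_disable_with_counter(void)
{
	preempt_disable();
	if (this_cpu_read(nr_migrate_disabled) &&
	    cpumask_weight(&current->cpus_mask) > 1) {
		/*
		 * Another migrate-disabled task already lives on this
		 * CPU: try to move to another allowed CPU first.
		 */
	}
	this_cpu_inc(nr_migrate_disabled);
	preempt_enable();
}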

-- Steve




Re: [patch RFC 00/15] mm/highmem: Provide a preemptible variant of kmap_atomic & friends

2020-09-23 Thread Steven Rostedt
On Wed, 23 Sep 2020 22:55:54 +0200
Thomas Gleixner  wrote:

> > Perhaps make migrate_disable() an anonymous local_lock()?
> >
> > This should lower the SHC in theory, if you can't have stacked migrate
> > disables on the same CPU.  
> 
> I'm pretty sure this ends up in locking hell pretty fast and aside of
> that it's not working for scenarios like:
> 
>  kmap_local();
>migrate_disable();
>...
> 
>  copy_from_user()
> -> #PF
>-> schedule()  
> 
> which brought us into that discussion in the first place. You would stop
> any other migrate disable user from running until the page fault is
> resolved...

Then scratch the idea of having an anonymous local_lock() and just bring
local_lock in directly? Then have a kmap local lock, which would only
block those that need to do a kmap.

Now as for migration disabled nesting, at least now we would have
groupings of this, and perhaps the theorists can handle that. I mean,
how is this much different than having a bunch of tasks blocked on a
mutex whose owner is pinned on a CPU?

migrate_disable() is a BKL of pinning affinity. If we only have
local_lock() available (even on !RT), then it makes the blocking in
groups. At least this way you could grep for all the different
local_locks in the system and plug that into the algorithm for WCS,
just like one would with a bunch of mutexes.

-- Steve



Re: [patch RFC 00/15] mm/highmem: Provide a preemptible variant of kmap_atomic & friends

2020-09-23 Thread Steven Rostedt
On Wed, 23 Sep 2020 10:40:32 +0200
pet...@infradead.org wrote:

> However, with migrate_disable() we can have each task preempted in a
> migrate_disable() region, worse we can stack them all on the _same_ CPU
> (super ridiculous odds, sure). And then we end up only able to run one
> task, with the rest of the CPUs picking their nose.

What if we just made migrate_disable() a local_lock() available for !RT?

I mean, make it a priority-inheritance PER-CPU lock.

That is, no two tasks could do a migrate_disable() on the same CPU? If
one task does a migrate_disable() and then gets preempted and the
preempting task does a migrate_disable() on the same CPU, it will block
and wait for the first task to do a migrate_enable().

No two tasks on the same CPU could enter the migrate_disable() section
simultaneously, just like no two tasks could enter a preempt_disable()
section.

In essence, we just allow local_lock() to be used for both RT and !RT.

Perhaps make migrate_disable() an anonymous local_lock()?

This should lower the SHC in theory, if you can't have stacked migrate
disables on the same CPU.
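
A rough, hypothetical sketch of the idea (never merged; the lock name
is invented here). On RT, local_lock() maps to a PI-aware sleeping
lock, which is the behavior being proposed:

static DEFINE_PER_CPU(local_lock_t, migrate_llock) =
	INIT_LOCAL_LOCK(migrate_llock);

static inline void migrate_disable_as_lock(void)
{
	/* a second task on this CPU blocks here until the unlock */
	local_lock(&migrate_llock);
}

static inline void migrate_enable_as_lock(void)
{
	local_unlock(&migrate_llock);
}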

-- Steve



Re: [RFC] ARC: initial ftrace support

2020-04-02 Thread Steven Rostedt
On Thu, 2 Apr 2020 01:17:01 +
Vineet Gupta  wrote:

> +CC Claudiu
> 
> On 3/27/20 10:10 AM, Steven Rostedt wrote:
> > On Fri, 27 Mar 2020 18:53:55 +0300
> > Eugeniy Paltsev  wrote:  
> 
> Maybe add a comment that gcc does the heavy lifting; I have the following in glibc:
> 
> +/* this is very simple as gcc does all the heavy lifting at _mcount call site
> + *  - sets up caller's blink in r0, so frompc is setup correctly
> + *  - preserve argument registers for original call */
> 
> >> +noinline void _mcount(unsigned long parent_ip)
> >> +{
> >> +  unsigned long ip = (unsigned long)__builtin_return_address(0);
> >> +
> >> +  if (unlikely(ftrace_trace_function != ftrace_stub))
> >> +  ftrace_trace_function(ip - MCOUNT_INSN_SIZE, parent_ip,
> >> +NULL, NULL);
> >> +}
> >> +EXPORT_SYMBOL(_mcount);  
> > 
> > So, ARCv2 allows the _mcount code to be written in C? Nice!  
> 
> Yeah, the gcc backend for -pg was overhauled recently so it is a first-class
> "lib call", meaning we get all the register save/restore for free, as well
> as the caller PC (blink) as an explicit argument to _mcount
> 
> void bar(int a, int b, int c) {
>   printf("%d\n", a, b, c);
> }
> 
> bar:
>   push_s  blink
>   std.a r14,[sp,-8]
>   push_s  r13
>   mov_s   r14,r1
>   mov_s   r13,r0
>   mov_s   r0,blink
>   bl.d @_mcount
>   mov_s   r15,r2
> 
>   mov_s   r3,r15   <-- restore args for call

We really don't want this. :-/

This will make it really difficult to implement dynamic ftrace, and it
causes more overhead when tracing is not enabled.

Also, ftrace is much more complex, and this will make it difficult to have
function graph tracing and other features.

Gcc has an "instrument-functions" option; people have asked me why I didn't
go that route, as it lets you do the same (call C code), and it's because of
the overhead it adds to each function that I turned it down.
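
For contrast, -finstrument-functions makes gcc emit a call pair around
every instrumented function; the standard hooks it calls are below,
and that per-function call pair is the overhead referred to:

/* gcc calls these on entry/exit of every instrumented function */
void __cyg_profile_func_enter(void *this_fn, void *call_site)
	__attribute__((no_instrument_function));
void __cyg_profile_func_exit(void *this_fn, void *call_site)
	__attribute__((no_instrument_function));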

-- Steve



>   mov_s   r2,r14
>   mov_s   r1,r13
>   mov_s   r0,@.LC0
>   ld  blink,[sp,12]
>   pop_s   r13
>   b.d @printf
>   ldd.ab r14,[sp,12]
> 
> @Eugeniy, this patch looks ok to me, but a word of caution. This won't work
> with the elf32 toolchain, which some of the build systems tend to use
> (Alexey ?)
> 
> The above _mcount semantics is only implemented for the linux tool-chains.
> elf32-gcc generates "legacy" __mcount (2 underscores, blink not provided as
> an arg), likely done by Claudiu to keep newlib stuff unchanged. Perhaps
> elf32 gcc can add a toggle to get the new _mcount.
> 
> And this is conditional to ARCv2 due to future ties into dynamic ftrace and
> instruction fudging etc.? We may have to revisit that for BE anyhow, given
> such a customer lining up.
> 
> -Vineet




Re: [RFC] ARC: initial ftrace support

2020-03-27 Thread Steven Rostedt
On Fri, 27 Mar 2020 18:53:55 +0300
Eugeniy Paltsev  wrote:

> +
> +noinline void _mcount(unsigned long parent_ip)
> +{
> + unsigned long ip = (unsigned long)__builtin_return_address(0);
> +
> + if (unlikely(ftrace_trace_function != ftrace_stub))
> + ftrace_trace_function(ip - MCOUNT_INSN_SIZE, parent_ip,
> +   NULL, NULL);
> +}
> +EXPORT_SYMBOL(_mcount);

So, ARCv2 allows the _mcount code to be written in C? Nice!

-- Steve



Re: [PATCH 00/50] Add log level to show_stack()

2019-11-13 Thread Steven Rostedt
On Wed, 13 Nov 2019 09:47:22 +0100
Petr Mladek  wrote:

> At the moment, I am in favor of this patchset. It is huge and
> needed a lot of manual work. But the result is straightforward and
> easy to understand.

I'm in favor of this patchset too. If there are other areas that need to
adjust the current loglevel (say, per task context), then we can cross
that bridge when the need arises. But I don't want to over-engineer
this, as the stack trace logic should have a way to explicitly state how
important a given stack trace really is (or better yet, we should be
removing stack traces that are not important!)

-- Steve



Re: [PATCH 00/50] Add log level to show_stack()

2019-11-13 Thread Steven Rostedt
On Wed, 6 Nov 2019 23:25:13 +
Russell King - ARM Linux admin  wrote:

> On Wed, Nov 06, 2019 at 09:34:40PM +0100, Peter Zijlstra wrote:
> > I suppose I'm surprised there are backtraces that are not important.
> > Either badness happened and it needs printing, or the user asked for it
> > and it needs printing.  
> 
> Or utterly meaningless.
> 
> > Perhaps we should be removing backtraces if they're not important
> > instead of allowing to print them as lower loglevels?  
> 
> Definitely!  WARN_ON() is well overused - and as is typical, used
> without much thought.  Bound to happen after Linus got shirty about
> BUG_ON() being overused.  Everyone just grabbed the next nearest thing
> to assert().
> 
> As a kind of example, I've recently come across one WARN_ON() in a
> driver subsystem (that shall remain nameless at the moment) which very
> likely has multiple different devices on a platform.  The WARN_ON()
> triggers as a result of a problem with the hardware, but because it's a
> WARN_ON(), you've no idea which device has a problem.  The backtrace is
> mostly meaningless.  So you know that a problem has occurred, but the
> kernel prints *useless* backtrace to let you know, and totally omits
> the *useful* information.
> 

I would like to bring up a topic for the next maintainers summit
(although I may not even be there): that we define a clear use of
WARN_ON(). I use it only if the code does something I do not expect it
to do, and it is considered a bug in the code if it triggers. But it
appears that some drivers use it for "oh, I didn't realize this hardware
does something I didn't expect", and the report is ignored when the
WARN_ON() triggers, with "you have buggy hardware"; yet my hardware
appears to work just fine!

-- Steve



Re: [PATCH 00/50] Add log level to show_stack()

2019-11-13 Thread Steven Rostedt
On Wed, 6 Nov 2019 21:34:40 +0100
Peter Zijlstra  wrote:

> I suppose I'm surprised there are backtraces that are not important.
> Either badness happened and it needs printing, or the user asked for it
> and it needs printing.

Unfortunately, that is the case, as my tests will fail if a backtrace is
detected.

> 
> Perhaps we should be removing backtraces if they're not important
> instead of allowing to print them as lower loglevels?

I usually end up removing backtraces for my tests, so I'm for this.
Specifically, this happens in the drm and i915 drivers :-p

-- Steve




Re: [PATCH 00/50] Add log level to show_stack()

2019-11-13 Thread Steven Rostedt
On Tue, 12 Nov 2019 11:17:47 +0900
Sergey Senozhatsky  wrote:

> void show_stack(struct task_struct *task, unsigned long *sp, int log_level)
> {
>   printk_emergency_enter(log_level);
>   __show_stack(task, sp);
>   printk_emergency_exit();
> }
> // - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - //
> 
> show_stack() never schedules, disabling preemption around it should
> not change anything. Should it be interrupted, we will handle it via
> preempt count.

Please no! The whole point of the printk rewrite was to allow for
printk to be preemptible and used in more contexts. The show_stack() can
be all over the place and is not a fast function. Let's not disable
preemption for it.

-- Steve



Re: [PATCH 00/50] Add log level to show_stack()

2019-11-13 Thread Steven Rostedt
On Tue, 12 Nov 2019 13:44:47 +0900
Sergey Senozhatsky  wrote:

> > > I do recall that we talked about per-CPU printk state bit which would
> > > start/end "just print it" section. We probably can extend it to "just
> > > log_store" type of functionality. Doesn't look like a very bad idea.  
> > 
> > The problem with per-CPU printk is that we would need to disable
> > interrupts.  
> 
> Or disable preemption and have loglevel per-CPU and per-context.
> preempt_count can navigate us to the right context loglevel on
> particular CPU. I'm talking here only about backtrace (error)
> reporting contexts. Those can be atomic perfectly fine.

With my real-time hat on, I'm totally against disabling preemption
for this purpose.

-- Steve



Re: [PATCH 5/6 v3] syscalls: Remove start and number from syscall_get_arguments() args

2019-04-04 Thread Steven Rostedt
On Thu, 4 Apr 2019 21:17:58 +0300
"Dmitry V. Levin"  wrote:

> There are several places listed below where I'd prefer to see more readable
> equivalents, but feel free to leave it to respective arch maintainers.

I was going to do similar changes, but figured I'd do just that (let
the arch maintainers further optimize the code). I made this more about
fixing the interface and less about the optimizations and cleanups that
it can allow.

-- Steve



[PATCH 6/6 v3] syscalls: Remove start and number from syscall_set_arguments() args

2019-04-01 Thread Steven Rostedt
From: "Steven Rostedt (VMware)" 

After removing the start and count arguments of syscall_get_arguments() it
seems reasonable to remove them from syscall_set_arguments(). Note, as of
today, there are no users of syscall_set_arguments(). But we are told that
there will be soon. But for now, at least make it consistent with
syscall_get_arguments().
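
A sketch of the new calling convention (the argument values below are
made up for illustration):

unsigned long args[6] = { 3, 0xbeef, 128, 0, 0, 0 };	/* hypothetical */

/* was: syscall_set_arguments(task, regs, 0, 6, args); */
syscall_set_arguments(task, regs, args);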

Link: http://lkml.kernel.org/r/20190327222014.ga32...@altlinux.org

Cc: Oleg Nesterov 
Cc: Thomas Gleixner 
Cc: Kees Cook 
Cc: Andy Lutomirski 
Cc: Dominik Brodowski 
Cc: Dave Martin 
Cc: "Dmitry V. Levin" 
Cc: x...@kernel.org
Cc: linux-snps-arc@lists.infradead.org
Cc: linux-ker...@vger.kernel.org
Cc: linux-arm-ker...@lists.infradead.org
Cc: linux-c6x-...@linux-c6x.org
Cc: uclinux-h8-de...@lists.sourceforge.jp
Cc: linux-hexa...@vger.kernel.org
Cc: linux-i...@vger.kernel.org
Cc: linux-m...@vger.kernel.org
Cc: nios2-...@lists.rocketboards.org
Cc: openr...@lists.librecores.org
Cc: linux-par...@vger.kernel.org
Cc: linuxppc-...@lists.ozlabs.org
Cc: linux-ri...@lists.infradead.org
Cc: linux-s...@vger.kernel.org
Cc: linux...@vger.kernel.org
Cc: sparcli...@vger.kernel.org
Cc: linux...@lists.infradead.org
Cc: linux-xte...@linux-xtensa.org
Cc: linux-a...@vger.kernel.org
Signed-off-by: Steven Rostedt (VMware) 
---
 arch/arm/include/asm/syscall.h| 22 ++---
 arch/arm64/include/asm/syscall.h  | 22 ++---
 arch/c6x/include/asm/syscall.h| 38 +++
 arch/csky/include/asm/syscall.h   | 14 ++
 arch/ia64/include/asm/syscall.h   | 10 ++--
 arch/ia64/kernel/ptrace.c |  7 ++-
 arch/microblaze/include/asm/syscall.h |  4 +-
 arch/nds32/include/asm/syscall.h  | 29 ++-
 arch/nios2/include/asm/syscall.h  | 42 +++-
 arch/openrisc/include/asm/syscall.h   |  6 +--
 arch/powerpc/include/asm/syscall.h|  7 +--
 arch/riscv/include/asm/syscall.h  | 13 ++---
 arch/s390/include/asm/syscall.h   | 11 ++---
 arch/sh/include/asm/syscall_32.h  | 21 +++-
 arch/sh/include/asm/syscall_64.h  |  4 +-
 arch/sparc/include/asm/syscall.h  |  7 ++-
 arch/um/include/asm/syscall-generic.h | 39 +++
 arch/x86/include/asm/syscall.h| 69 +++
 arch/xtensa/include/asm/syscall.h | 17 ++-
 include/asm-generic/syscall.h | 10 +---
 20 files changed, 88 insertions(+), 304 deletions(-)

diff --git a/arch/arm/include/asm/syscall.h b/arch/arm/include/asm/syscall.h
index db969a2972ae..080ce70cab12 100644
--- a/arch/arm/include/asm/syscall.h
+++ b/arch/arm/include/asm/syscall.h
@@ -65,26 +65,12 @@ static inline void syscall_get_arguments(struct task_struct *task,
 
 static inline void syscall_set_arguments(struct task_struct *task,
 struct pt_regs *regs,
-unsigned int i, unsigned int n,
 const unsigned long *args)
 {
-   if (n == 0)
-   return;
-
-   if (i + n > SYSCALL_MAX_ARGS) {
-   pr_warn("%s called with max args %d, handling only %d\n",
-   __func__, i + n, SYSCALL_MAX_ARGS);
-   n = SYSCALL_MAX_ARGS - i;
-   }
-
-   if (i == 0) {
-   regs->ARM_ORIG_r0 = args[0];
-   args++;
-   i++;
-   n--;
-   }
-
-   memcpy(&regs->ARM_r0 + i, args, n * sizeof(args[0]));
+   regs->ARM_ORIG_r0 = args[0];
+   args++;
+
+   memcpy(&regs->ARM_r0 + 1, args, 5 * sizeof(args[0]));
 }
 
 static inline int syscall_get_arch(void)
diff --git a/arch/arm64/include/asm/syscall.h b/arch/arm64/include/asm/syscall.h
index 55b2dab21023..a179df3674a1 100644
--- a/arch/arm64/include/asm/syscall.h
+++ b/arch/arm64/include/asm/syscall.h
@@ -75,26 +75,12 @@ static inline void syscall_get_arguments(struct task_struct *task,
 
 static inline void syscall_set_arguments(struct task_struct *task,
 struct pt_regs *regs,
-unsigned int i, unsigned int n,
 const unsigned long *args)
 {
-   if (n == 0)
-   return;
-
-   if (i + n > SYSCALL_MAX_ARGS) {
-   pr_warning("%s called with max args %d, handling only %d\n",
-  __func__, i + n, SYSCALL_MAX_ARGS);
-   n = SYSCALL_MAX_ARGS - i;
-   }
-
-   if (i == 0) {
-   regs->orig_x0 = args[0];
-   args++;
-   i++;
-   n--;
-   }
-
-   memcpy(&regs->regs[i], args, n * sizeof(args[0]));
+   regs->orig_x0 = args[0];
+   args++;
+
+   memcpy(&regs->regs[1], args, 5 * sizeof(args[0]));
 }
 
 /*
diff --git a/arch/c6x/include/asm/syscall.h b/arch/c6x/include/asm/syscall.h
index 06db3251926b..15ba8599858e 100644
--- a/arch/c6x/include/asm/syscall.h
+++ b/arch/c6x/include/asm/syscall.h
@@ -59,40 +59,14

[PATCH 5/6 v3] syscalls: Remove start and number from syscall_get_arguments() args

2019-04-01 Thread Steven Rostedt
From: "Steven Rostedt (Red Hat)" 

At Linux Plumbers, Andy Lutomirski approached me and pointed out that the
function call syscall_get_arguments() implemented in x86 was horribly
written and not optimized for the standard case of passing in 0 and 6 for
the starting index and the number of system call arguments to get. When looking at
all the users of this function, I discovered that all instances pass in only
0 and 6 for these arguments. Instead of having this function handle
different cases that are never used, simply rewrite it to return the first 6
arguments of a system call.

This should help out the performance of tracing system calls by ptrace,
ftrace and perf.
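
A sketch of the new calling convention, per the updated prototypes
below:

unsigned long args[6];

/* was: syscall_get_arguments(task, regs, 0, 6, args); */
syscall_get_arguments(task, regs, args);	/* always fills all six */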

Link: http://lkml.kernel.org/r/20161107213233.754809...@goodmis.org

Cc: Oleg Nesterov 
Cc: Thomas Gleixner 
Cc: Kees Cook 
Cc: Andy Lutomirski 
Cc: Dominik Brodowski 
Cc: Dave Martin 
Cc: "Dmitry V. Levin" 
Cc: x...@kernel.org
Cc: linux-snps-arc@lists.infradead.org
Cc: linux-ker...@vger.kernel.org
Cc: linux-arm-ker...@lists.infradead.org
Cc: linux-c6x-...@linux-c6x.org
Cc: uclinux-h8-de...@lists.sourceforge.jp
Cc: linux-hexa...@vger.kernel.org
Cc: linux-i...@vger.kernel.org
Cc: linux-m...@vger.kernel.org
Cc: nios2-...@lists.rocketboards.org
Cc: openr...@lists.librecores.org
Cc: linux-par...@vger.kernel.org
Cc: linuxppc-...@lists.ozlabs.org
Cc: linux-ri...@lists.infradead.org
Cc: linux-s...@vger.kernel.org
Cc: linux...@vger.kernel.org
Cc: sparcli...@vger.kernel.org
Cc: linux...@lists.infradead.org
Cc: linux-xte...@linux-xtensa.org
Cc: linux-a...@vger.kernel.org
Reported-by: Andy Lutomirski 
Signed-off-by: Steven Rostedt (VMware) 
---
 arch/arc/include/asm/syscall.h|  7 ++-
 arch/arm/include/asm/syscall.h| 23 ++---
 arch/arm64/include/asm/syscall.h  | 22 ++--
 arch/c6x/include/asm/syscall.h| 41 +++
 arch/csky/include/asm/syscall.h   | 14 ++---
 arch/h8300/include/asm/syscall.h  | 34 +++--
 arch/hexagon/include/asm/syscall.h|  4 +-
 arch/ia64/include/asm/syscall.h   |  5 +-
 arch/microblaze/include/asm/syscall.h |  4 +-
 arch/mips/include/asm/syscall.h   |  3 +-
 arch/mips/kernel/ptrace.c |  2 +-
 arch/nds32/include/asm/syscall.h  | 33 +++-
 arch/nios2/include/asm/syscall.h  | 42 +++
 arch/openrisc/include/asm/syscall.h   |  6 +--
 arch/parisc/include/asm/syscall.h | 30 +++
 arch/powerpc/include/asm/syscall.h|  8 ++-
 arch/riscv/include/asm/syscall.h  | 13 ++---
 arch/s390/include/asm/syscall.h   | 17 ++-
 arch/sh/include/asm/syscall_32.h  | 26 +++---
 arch/sh/include/asm/syscall_64.h  |  4 +-
 arch/sparc/include/asm/syscall.h  |  4 +-
 arch/um/include/asm/syscall-generic.h | 39 +++---
 arch/x86/include/asm/syscall.h| 73 +++
 arch/xtensa/include/asm/syscall.h | 16 ++
 include/asm-generic/syscall.h | 11 ++--
 include/trace/events/syscalls.h   |  2 +-
 kernel/seccomp.c  |  2 +-
 kernel/trace/trace_syscalls.c |  4 +-
 lib/syscall.c |  2 +-
 29 files changed, 113 insertions(+), 378 deletions(-)

diff --git a/arch/arc/include/asm/syscall.h b/arch/arc/include/asm/syscall.h
index 29de09804306..c7a4201ed62b 100644
--- a/arch/arc/include/asm/syscall.h
+++ b/arch/arc/include/asm/syscall.h
@@ -55,12 +55,11 @@ syscall_set_return_value(struct task_struct *task, struct pt_regs *regs,
  */
 static inline void
 syscall_get_arguments(struct task_struct *task, struct pt_regs *regs,
- unsigned int i, unsigned int n, unsigned long *args)
+ unsigned long *args)
 {
unsigned long *inside_ptregs = &(regs->r0);
-   inside_ptregs -= i;
-
-   BUG_ON((i + n) > 6);
+   unsigned int n = 6;
+   unsigned int i = 0;
 
while (n--) {
args[i++] = (*inside_ptregs);
diff --git a/arch/arm/include/asm/syscall.h b/arch/arm/include/asm/syscall.h
index 06dea6bce293..db969a2972ae 100644
--- a/arch/arm/include/asm/syscall.h
+++ b/arch/arm/include/asm/syscall.h
@@ -55,29 +55,12 @@ static inline void syscall_set_return_value(struct task_struct *task,
 
 static inline void syscall_get_arguments(struct task_struct *task,
 struct pt_regs *regs,
-unsigned int i, unsigned int n,
 unsigned long *args)
 {
-   if (n == 0)
-   return;
-
-   if (i + n > SYSCALL_MAX_ARGS) {
-   unsigned long *args_bad = args + SYSCALL_MAX_ARGS - i;
-   unsigned int n_bad = n + i - SYSCALL_MAX_ARGS;
-   pr_warn("%s called with max args %d, handling only %d\n",
-   __func__, i + n, SYSCALL_MAX_ARGS);
-   memset(args_bad, 0, n_bad * sizeof(args[0]));
-   n = SYSCALL_MAX_ARGS - i;
-   }
-
-   if (i == 0) {

[RFC][PATCH 3/4 v2] syscalls: Remove start and number from syscall_get_arguments() args

2019-03-28 Thread Steven Rostedt
From: "Steven Rostedt (Red Hat)" 

At Linux Plumbers, Andy Lutomirski approached me and pointed out that the
function call syscall_get_arguments() implemented in x86 was horribly
written and not optimized for the standard case of passing in 0 and 6 for
the starting index and the number of system call arguments to get. When looking at
all the users of this function, I discovered that all instances pass in only
0 and 6 for these arguments. Instead of having this function handle
different cases that are never used, simply rewrite it to return the first 6
arguments of a system call.

This should help out the performance of tracing system calls by ptrace,
ftrace and perf.

Link: http://lkml.kernel.org/r/20161107213233.754809...@goodmis.org

Cc: Oleg Nesterov 
Cc: Thomas Gleixner 
Cc: Kees Cook 
Cc: Andy Lutomirski 
Cc: Dominik Brodowski 
Cc: Dave Martin 
Cc: "Dmitry V. Levin" 
Cc: x...@kernel.org
Cc: linux-snps-arc@lists.infradead.org
Cc: linux-ker...@vger.kernel.org
Cc: linux-arm-ker...@lists.infradead.org
Cc: linux-c6x-...@linux-c6x.org
Cc: uclinux-h8-de...@lists.sourceforge.jp
Cc: linux-hexa...@vger.kernel.org
Cc: linux-i...@vger.kernel.org
Cc: linux-m...@vger.kernel.org
Cc: nios2-...@lists.rocketboards.org
Cc: openr...@lists.librecores.org
Cc: linux-par...@vger.kernel.org
Cc: linuxppc-...@lists.ozlabs.org
Cc: linux-ri...@lists.infradead.org
Cc: linux-s...@vger.kernel.org
Cc: linux...@vger.kernel.org
Cc: sparcli...@vger.kernel.org
Cc: linux...@lists.infradead.org
Cc: linux-xte...@linux-xtensa.org
Cc: linux-a...@vger.kernel.org
Reported-by: Andy Lutomirski 
Signed-off-by: Steven Rostedt (VMware) 
---
 arch/arc/include/asm/syscall.h|  7 ++-
 arch/arm/include/asm/syscall.h| 23 ++---
 arch/arm64/include/asm/syscall.h  | 22 ++--
 arch/c6x/include/asm/syscall.h| 41 +++
 arch/csky/include/asm/syscall.h   | 13 ++---
 arch/h8300/include/asm/syscall.h  | 34 +++--
 arch/hexagon/include/asm/syscall.h|  4 +-
 arch/ia64/include/asm/syscall.h   |  5 +-
 arch/microblaze/include/asm/syscall.h |  4 +-
 arch/mips/include/asm/syscall.h   |  3 +-
 arch/mips/kernel/ptrace.c |  2 +-
 arch/nds32/include/asm/syscall.h  | 33 +++-
 arch/nios2/include/asm/syscall.h  | 42 +++
 arch/openrisc/include/asm/syscall.h   |  6 +--
 arch/parisc/include/asm/syscall.h | 30 +++
 arch/powerpc/include/asm/syscall.h|  8 ++-
 arch/riscv/include/asm/syscall.h  | 12 ++---
 arch/s390/include/asm/syscall.h   | 17 ++-
 arch/sh/include/asm/syscall_32.h  | 26 +++---
 arch/sh/include/asm/syscall_64.h  |  4 +-
 arch/sparc/include/asm/syscall.h  |  4 +-
 arch/um/include/asm/syscall-generic.h | 39 +++---
 arch/x86/include/asm/syscall.h| 73 +++
 arch/xtensa/include/asm/syscall.h | 16 ++
 include/asm-generic/syscall.h | 11 ++--
 include/trace/events/syscalls.h   |  2 +-
 kernel/seccomp.c  |  2 +-
 kernel/trace/trace_syscalls.c |  4 +-
 lib/syscall.c |  2 +-
 29 files changed, 113 insertions(+), 376 deletions(-)

diff --git a/arch/arc/include/asm/syscall.h b/arch/arc/include/asm/syscall.h
index 29de09804306..c7a4201ed62b 100644
--- a/arch/arc/include/asm/syscall.h
+++ b/arch/arc/include/asm/syscall.h
@@ -55,12 +55,11 @@ syscall_set_return_value(struct task_struct *task, struct pt_regs *regs,
  */
 static inline void
 syscall_get_arguments(struct task_struct *task, struct pt_regs *regs,
- unsigned int i, unsigned int n, unsigned long *args)
+ unsigned long *args)
 {
unsigned long *inside_ptregs = &(regs->r0);
-   inside_ptregs -= i;
-
-   BUG_ON((i + n) > 6);
+   unsigned int n = 6;
+   unsigned int i = 0;
 
while (n--) {
args[i++] = (*inside_ptregs);
diff --git a/arch/arm/include/asm/syscall.h b/arch/arm/include/asm/syscall.h
index 06dea6bce293..db969a2972ae 100644
--- a/arch/arm/include/asm/syscall.h
+++ b/arch/arm/include/asm/syscall.h
@@ -55,29 +55,12 @@ static inline void syscall_set_return_value(struct task_struct *task,
 
 static inline void syscall_get_arguments(struct task_struct *task,
 struct pt_regs *regs,
-unsigned int i, unsigned int n,
 unsigned long *args)
 {
-   if (n == 0)
-   return;
-
-   if (i + n > SYSCALL_MAX_ARGS) {
-   unsigned long *args_bad = args + SYSCALL_MAX_ARGS - i;
-   unsigned int n_bad = n + i - SYSCALL_MAX_ARGS;
-   pr_warn("%s called with max args %d, handling only %d\n",
-   __func__, i + n, SYSCALL_MAX_ARGS);
-   memset(args_bad, 0, n_bad * sizeof(args[0]));
-   n = SYSCALL_MAX_ARGS - i;
-   }
-
-   if (i == 0) {

[RFC][PATCH 4/4 v2] syscalls: Remove start and number from syscall_set_arguments() args

2019-03-28 Thread Steven Rostedt
From: "Steven Rostedt (VMware)" 

After removing the start and count arguments of syscall_get_arguments() it
seems reasonable to remove them from syscall_set_arguments(). Note, as of
today, there are no users of syscall_set_arguments(). But we are told that
there will be soon. But for now, at least make it consistent with
syscall_get_arguments().

Link: http://lkml.kernel.org/r/20190327222014.ga32...@altlinux.org

Cc: Oleg Nesterov 
Cc: Thomas Gleixner 
Cc: Kees Cook 
Cc: Andy Lutomirski 
Cc: Dominik Brodowski 
Cc: Dave Martin 
Cc: "Dmitry V. Levin" 
Cc: x...@kernel.org
Cc: linux-snps-arc@lists.infradead.org
Cc: linux-ker...@vger.kernel.org
Cc: linux-arm-ker...@lists.infradead.org
Cc: linux-c6x-...@linux-c6x.org
Cc: uclinux-h8-de...@lists.sourceforge.jp
Cc: linux-hexa...@vger.kernel.org
Cc: linux-i...@vger.kernel.org
Cc: linux-m...@vger.kernel.org
Cc: nios2-...@lists.rocketboards.org
Cc: openr...@lists.librecores.org
Cc: linux-par...@vger.kernel.org
Cc: linuxppc-...@lists.ozlabs.org
Cc: linux-ri...@lists.infradead.org
Cc: linux-s...@vger.kernel.org
Cc: linux...@vger.kernel.org
Cc: sparcli...@vger.kernel.org
Cc: linux...@lists.infradead.org
Cc: linux-xte...@linux-xtensa.org
Cc: linux-a...@vger.kernel.org
Signed-off-by: Steven Rostedt (VMware) 
---
 arch/arm/include/asm/syscall.h| 22 ++---
 arch/arm64/include/asm/syscall.h  | 22 ++---
 arch/c6x/include/asm/syscall.h| 38 +++
 arch/csky/include/asm/syscall.h   | 13 ++---
 arch/ia64/include/asm/syscall.h   | 10 ++--
 arch/ia64/kernel/ptrace.c |  7 ++-
 arch/microblaze/include/asm/syscall.h |  4 +-
 arch/nds32/include/asm/syscall.h  | 29 ++-
 arch/nios2/include/asm/syscall.h  | 42 +++-
 arch/openrisc/include/asm/syscall.h   |  6 +--
 arch/powerpc/include/asm/syscall.h|  7 +--
 arch/riscv/include/asm/syscall.h  | 12 ++---
 arch/s390/include/asm/syscall.h   | 11 ++---
 arch/sh/include/asm/syscall_32.h  | 21 +++-
 arch/sh/include/asm/syscall_64.h  |  4 +-
 arch/sparc/include/asm/syscall.h  |  7 ++-
 arch/um/include/asm/syscall-generic.h | 39 +++
 arch/x86/include/asm/syscall.h| 69 +++
 arch/xtensa/include/asm/syscall.h | 17 ++-
 include/asm-generic/syscall.h | 10 +---
 20 files changed, 88 insertions(+), 302 deletions(-)

diff --git a/arch/arm/include/asm/syscall.h b/arch/arm/include/asm/syscall.h
index db969a2972ae..080ce70cab12 100644
--- a/arch/arm/include/asm/syscall.h
+++ b/arch/arm/include/asm/syscall.h
@@ -65,26 +65,12 @@ static inline void syscall_get_arguments(struct task_struct *task,
 
 static inline void syscall_set_arguments(struct task_struct *task,
 struct pt_regs *regs,
-unsigned int i, unsigned int n,
 const unsigned long *args)
 {
-   if (n == 0)
-   return;
-
-   if (i + n > SYSCALL_MAX_ARGS) {
-   pr_warn("%s called with max args %d, handling only %d\n",
-   __func__, i + n, SYSCALL_MAX_ARGS);
-   n = SYSCALL_MAX_ARGS - i;
-   }
-
-   if (i == 0) {
-   regs->ARM_ORIG_r0 = args[0];
-   args++;
-   i++;
-   n--;
-   }
-
-   memcpy(&regs->ARM_r0 + i, args, n * sizeof(args[0]));
+   regs->ARM_ORIG_r0 = args[0];
+   args++;
+
+   memcpy(&regs->ARM_r0 + 1, args, 5 * sizeof(args[0]));
 }
 
 static inline int syscall_get_arch(void)
diff --git a/arch/arm64/include/asm/syscall.h b/arch/arm64/include/asm/syscall.h
index 55b2dab21023..a179df3674a1 100644
--- a/arch/arm64/include/asm/syscall.h
+++ b/arch/arm64/include/asm/syscall.h
@@ -75,26 +75,12 @@ static inline void syscall_get_arguments(struct task_struct *task,
 
 static inline void syscall_set_arguments(struct task_struct *task,
 struct pt_regs *regs,
-unsigned int i, unsigned int n,
 const unsigned long *args)
 {
-   if (n == 0)
-   return;
-
-   if (i + n > SYSCALL_MAX_ARGS) {
-   pr_warning("%s called with max args %d, handling only %d\n",
-  __func__, i + n, SYSCALL_MAX_ARGS);
-   n = SYSCALL_MAX_ARGS - i;
-   }
-
-   if (i == 0) {
-   regs->orig_x0 = args[0];
-   args++;
-   i++;
-   n--;
-   }
-
-   memcpy(&regs->regs[i], args, n * sizeof(args[0]));
+   regs->orig_x0 = args[0];
+   args++;
+
+   memcpy(&regs->regs[1], args, 5 * sizeof(args[0]));
 }
 
 /*
diff --git a/arch/c6x/include/asm/syscall.h b/arch/c6x/include/asm/syscall.h
index 06db3251926b..15ba8599858e 100644
--- a/arch/c6x/include/asm/syscall.h
+++ b/arch/c6x/include/asm/syscall.h
@@ -59,40 +59,14

Re: [PATCH] OF: move extern declarations of entry pointers inside ifdef

2017-08-25 Thread Steven Rostedt
On Fri, 25 Aug 2017 15:20:12 +
Eugeniy Paltsev <eugeniy.palt...@synopsys.com> wrote:

> On Fri, 2017-08-25 at 11:12 -0400, Steven Rostedt wrote:
> > On Fri, 25 Aug 2017 18:00:26 +0300
> > Eugeniy Paltsev <eugeniy.palt...@synopsys.com> wrote:
> >   
> > > Move extern declarations of "of_root", "of_chosen", "of_aliases",
> > > "of_stdout" pointers inside "CONFIG_OF" ifdef to be able to get rid
> > > of "CONFIG_OF" ifdef in their usage places.
> > > 
> > > Suggested-by: Steven Rostedt <rost...@goodmis.org>  
> > 
> > Thanks!
> > -- Steve  
> 
> Could you please ACK my patch if it looks good to you?
> Thanks.

I could, but ACKs are usually for the maintainer of the code being
changed. I don't maintain include/linux/of.h

But for what it's worth...

Acked-by: Steven Rostedt (VMware) <rost...@goodmis.org>

-- Steve



Re: [PATCH] OF: move extern declarations of entry pointers inside ifdef

2017-08-25 Thread Steven Rostedt
On Fri, 25 Aug 2017 18:00:26 +0300
Eugeniy Paltsev <eugeniy.palt...@synopsys.com> wrote:

> Move extern declarations of "of_root", "of_chosen", "of_aliases",
> "of_stdout" pointers inside "CONFIG_OF" ifdef to be able to get rid
> of "CONFIG_OF" ifdef in their usage places.
> 
> Suggested-by: Steven Rostedt <rost...@goodmis.org>

Thanks!

-- Steve

> 
> Signed-off-by: Eugeniy Paltsev <eugeniy.palt...@synopsys.com>
> ---



Re: [PATCH] console: don't select first registered console if stdout-path used

2017-08-25 Thread Steven Rostedt
On Fri, 25 Aug 2017 16:14:51 +0300
Eugeniy Paltsev  wrote:

> In the current implementation we take the first console that
> registers if we didn't select one.
> 
> But if we specify a console via the "stdout-path" property in the device
> tree, we don't want the first console that registers here to be selected.
> Otherwise we may choose the wrong console - for example, if some console
> is registered earlier than the one pointed to by "stdout-path" - because
> the console pointed to by "stdout-path" can be added as preferred quite
> late, when its driver is probed.
> 
> Signed-off-by: Eugeniy Paltsev 
> ---
>  kernel/printk/printk.c | 11 +++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
> index 512f7c2..23262c1 100644
> --- a/kernel/printk/printk.c
> +++ b/kernel/printk/printk.c
> @@ -26,6 +26,7 @@
>  #include 
>  #include 
>  #include 
> +#include <linux/of.h>
>  #include 
>  #include 
>  #include 
> @@ -2431,6 +2432,16 @@ void register_console(struct console *newcon)
>   if (!has_preferred || bcon || !console_drivers)
>   has_preferred = preferred_console >= 0;
>  
> +
> + /*
> +  * If we specify console via "stdout-path" property in device tree
> +  * we don't want first console that registers here to be selected.
> +  */
> +#ifdef CONFIG_OF
> + if (of_stdout)
> + has_preferred = true;
> +#endif

To remove the #ifdef from the code, could of.h move the

#ifdef CONFIG_OF

above the extern declarations of of_stdout and friends, and then have:

#else /* !CONFIG_OF */

#define of_stdout   NULL

#endif

Then we could just have:

if (of_stdout)
has_preferred = true;

without the ifdefs.

-- Steve

> +
>   /*
>*  See if we want to use this console driver. If we
>*  didn't select a console we take the first one

