> permissions to access hv_24x7 pmu counters. If not, event_open
> will fail. Hence add a sanity check to see if event_open
> succeeds before proceeding with the test.
>
> Fixes: b20d9215a35f ("perf test: Add event group test for events in multiple PMUs")
> Signed-off-
On 14-Oct-22 3:26 PM, Ravi Bangoria wrote:
> On 13-Oct-22 4:29 PM, Peter Zijlstra wrote:
>> On Thu, Oct 13, 2022 at 03:37:23PM +0530, Ravi Bangoria wrote:
>>
>>>> - refcount_t refcount;
>>>> + refcount_t refcount;
On 13-Oct-22 4:29 PM, Peter Zijlstra wrote:
> On Thu, Oct 13, 2022 at 03:37:23PM +0530, Ravi Bangoria wrote:
>
>>> - refcount_t refcount;
>>> + refcount_t refcount; /* event <-> ctx */
>>
>> Ok. We need to
On 13-Oct-22 2:17 AM, Peter Zijlstra wrote:
> On Wed, Oct 12, 2022 at 02:16:29PM +0200, Peter Zijlstra wrote:
>
>> That's the intent yeah. But due to not always holding ctx->mutex over
>> put_pmu_ctx() this might be moot. I'm almost through auditing epc usage
>> and I think ctx->lock is
On 11-Oct-22 11:17 PM, Peter Zijlstra wrote:
> On Tue, Oct 11, 2022 at 04:02:56PM +0200, Peter Zijlstra wrote:
>> On Tue, Oct 11, 2022 at 06:49:55PM +0530, Ravi Bangoria wrote:
>>> On 11-Oct-22 4:59 PM, Peter Zijlstra wrote:
>>>> On Sat, Oct 08, 2022 at 11:54:2
On 11-Oct-22 4:59 PM, Peter Zijlstra wrote:
> On Sat, Oct 08, 2022 at 11:54:24AM +0530, Ravi Bangoria wrote:
>
>> +static void perf_event_swap_task_ctx_data(struct perf_event_context
>> *prev_ctx,
>> + struct perf_event_context *next_c
On 10-Oct-22 3:53 PM, Peter Zijlstra wrote:
> On Tue, Sep 06, 2022 at 11:20:53AM +0530, Ravi Bangoria wrote:
>
>> This one was simple enough so I prepared a patch for this. Let
>> me know if you see any issues with below diff.
>
> I've extracted this as a separate patch
On 10-Oct-22 3:44 PM, Peter Zijlstra wrote:
> On Wed, Sep 07, 2022 at 04:58:49PM +0530, Ravi Bangoria wrote:
>>> -static void
>>> -ctx_flexible_sched_in(struct perf_event_context *ctx,
>>> - struct perf_cpu_context *cpuctx)
>>> +/* XXX .busy t
> -static void
> -ctx_flexible_sched_in(struct perf_event_context *ctx,
> - struct perf_cpu_context *cpuctx)
> +/* XXX .busy thingy from Peter's patch */
> +static void ctx_flexible_sched_in(struct perf_event_context *ctx, struct pmu
> *pmu)
This one turned out to be very easy.
> @@ -9752,10 +9889,13 @@ void perf_tp_event(u16 event_type, u64 count, void
> *record, int entry_size,
> struct trace_entry *entry = record;
>
> rcu_read_lock();
> - ctx = rcu_dereference(task->perf_event_ctxp[perf_sw_context]);
> + ctx =
> So the basic issue I mentioned is that:
>
>
> /*
> * ,[1:n]-.
> * V V
> * perf_event_context <-[1:n]-> perf_event_pmu_context <--- perf_event
> * ^
On 29-Aug-22 8:10 PM, Peter Zijlstra wrote:
> On Mon, Aug 29, 2022 at 02:04:33PM +0200, Peter Zijlstra wrote:
>> On Mon, Aug 29, 2022 at 05:03:47PM +0530, Ravi Bangoria wrote:
>>> @@ -12598,6 +12590,7 @@ EXPORT_SYMBOL_GPL(perf_event_create_kernel_counter);
>>>
>>
s://lore.kernel.org/r/8d91528b-e830-6ad0-8a92-621ce9f944ca@amd.com
Signed-off-by: Peter Zijlstra
Signed-off-by: Ravi Bangoria
---
This is the 3rd version of the perf event context rework. It's quite
stable now, so I thought to remove the RFC tag. Previous versions:
RFC v2: https://lore.kernel.
@@ -763,6 +771,14 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
struct codegen_context *
/* dst = *(u16 *)(ul) (src + off) */
case BPF_LDX | BPF_MEM | BPF_H:
case BPF_LDX | BPF_PROBE_MEM | BPF_H:
+ if (BPF_MODE(code) == BPF_PROBE_MEM) {
+
On 7/6/21 3:23 PM, Christophe Leroy wrote:
Le 06/07/2021 à 09:32, Ravi Bangoria a écrit :
BPF load instruction with BPF_PROBE_MEM mode can cause a fault
inside kernel. Append exception table for such instructions
within BPF program.
Can you do the same for 32bit ?
Sure. But before
> TASK_SIZE_MAX, otherwise set dst_reg=0 and move on.
This will catch NULL, valid or invalid userspace pointers. Only bad
kernel pointer will be handled by BPF exception table.
[Alexei suggested for x86]
Suggested-by: Alexei Starovoitov
Signed-off-by: Ravi Bangoria
---
arch/powerpc/net/bpf_jit_com
| b 0x40b0 |
|--|
0x4290 -->| insn=0xfd90 | \ extable entry
| fixup=0xffec | /
0x4298 -->| insn=0xfe14 |
| fixup=0xffec |
+--+
(Addresses shown here are chosen random, not real)
Signed-off-by: Ravi B
In case of extra_pass, we always skip the usual JIT passes. Thus
extra_pass is always false while calling bpf_jit_build_body(),
and so it can be removed.
Signed-off-by: Ravi Bangoria
---
arch/powerpc/net/bpf_jit.h| 2 +-
arch/powerpc/net/bpf_jit_comp.c | 6 +++---
arch/powerpc/net
Patch #1, #2 are simple cleanup patches. Patch #3 adds
BPF_PROBE_MEM support with PowerPC 64bit JIT compiler.
Patch #4 adds explicit addr > TASK_SIZE_MAX check to
handle bad userspace pointers.
Ravi Bangoria (4):
bpf powerpc: Remove unused SEEN_STACK
bpf powerpc: Remove extra_pass f
SEEN_STACK is unused on PowerPC. Remove it. Also, have
SEEN_TAILCALL use 0x4000.
Signed-off-by: Ravi Bangoria
---
arch/powerpc/net/bpf_jit.h | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
index 99fad093f43e
erlapping): Ok
ptrace thread event -> perf other thread & cpu event: Ok
success: ptrace-perf-hwbreak
Signed-off-by: Ravi Bangoria
---
.../selftests/powerpc/ptrace/.gitignore | 1 +
.../testing/selftests/powerpc/ptrace/Makefile | 2 +-
.../powerpc/ptrace/ptrace-perf-hwbreak.
, one is RO, other is WO
TESTED: Process specific, 512 bytes, unaligned
success: perf_hwbreak
Signed-off-by: Ravi Bangoria
---
.../selftests/powerpc/ptrace/perf-hwbreak.c | 552 +-
1 file changed, 551 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/powerpc
The perf-hwbreak selftest opens a hw-breakpoint event at multiple
places, repeating the same code each time. Coalesce that code into a
function.
Signed-off-by: Ravi Bangoria
---
.../selftests/powerpc/ptrace/perf-hwbreak.c | 79 +--
1 file changed, 39 insertions(+), 40 deletions
, len: 6: Ok
PPC_PTRACE_SETHWDEBUG 2, MODE_RANGE, DAWR Overlap, RO, len: 6: Ok
Signed-off-by: Ravi Bangoria
Reviewed-by: Daniel Axtens
---
.../selftests/powerpc/ptrace/ptrace-hwbreak.c | 79 +++
1 file changed, 79 insertions(+)
diff --git a/tools/testing/selftests/powerpc/ptrace
2-1-ravi.bango...@linux.ibm.com
v1->v2:
- Kvm patches are already upstream
- Rebased selftests to powerpc/next
Ravi Bangoria (4):
powerpc/selftests/ptrace-hwbreak: Add testcases for 2nd DAWR
powerpc/selftests/perf-hwbreak: Coalesce event creation code
powerpc/selftests/perf-hwbreak: Add tes
On 4/9/21 12:49 PM, Daniel Axtens wrote:
Hi Ravi,
The perf-hwbreak selftest opens a hw-breakpoint event at multiple
places, repeating the same code each time. Coalesce that code into a
function.
Signed-off-by: Ravi Bangoria
---
.../selftests/powerpc/ptrace/perf-hwbreak.c | 78
On 4/9/21 12:22 PM, Daniel Axtens wrote:
Hi Ravi,
Add selftests to test multiple active DAWRs with ptrace interface.
It would be good if somewhere (maybe in the cover letter) you explain
what DAWR stands for and where to find more information about it. I
found the Power ISA v3.1 Book 3
erlapping): Ok
ptrace thread event -> perf other thread & cpu event: Ok
success: ptrace-perf-hwbreak
Signed-off-by: Ravi Bangoria
---
.../selftests/powerpc/ptrace/.gitignore | 1 +
.../testing/selftests/powerpc/ptrace/Makefile | 2 +-
.../powerpc/ptrace/ptrace-perf-hwbreak.
, one is RO, other is WO
TESTED: Process specific, 512 bytes, unaligned
success: perf_hwbreak
Signed-off-by: Ravi Bangoria
---
.../selftests/powerpc/ptrace/perf-hwbreak.c | 568 +-
1 file changed, 567 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/powerpc
The perf-hwbreak selftest opens a hw-breakpoint event at multiple
places, repeating the same code each time. Coalesce that code into a
function.
Signed-off-by: Ravi Bangoria
---
.../selftests/powerpc/ptrace/perf-hwbreak.c | 78 +--
1 file changed, 38 insertions(+), 40 deletions
, len: 6: Ok
PPC_PTRACE_SETHWDEBUG 2, MODE_RANGE, DAWR Overlap, RO, len: 6: Ok
Signed-off-by: Ravi Bangoria
---
.../selftests/powerpc/ptrace/ptrace-hwbreak.c | 79 +++
1 file changed, 79 insertions(+)
diff --git a/tools/testing/selftests/powerpc/ptrace/ptrace-hwbreak.c
b/tools
Add selftests for 2nd DAWR supported by Power10.
v1:
https://lore.kernel.org/r/20200723102058.312282-1-ravi.bango...@linux.ibm.com
v1->v2:
- Kvm patches are already upstream
- Rebased selftests to powerpc/next
Ravi Bangoria (4):
powerpc/selftests/ptrace-hwbreak: Add testcases for 2nd D
. In this case just print an error and continue.
Signed-off-by: Ravi Bangoria
Acked-by: Naveen N. Rao
Acked-by: Sandipan Das
---
v4:
https://lore.kernel.org/r/20210305115433.140769-1-ravi.bango...@linux.ibm.com
v4->v5:
- Replace SZ_ macros with numbers
arch/powerpc/kernel/uprobes.c
On 3/9/21 4:51 PM, Naveen N. Rao wrote:
On 2021/03/09 08:54PM, Michael Ellerman wrote:
Ravi Bangoria writes:
As per ISA 3.1, a prefixed instruction should not cross a 64-byte
boundary. So don't allow a Uprobe on such a prefixed instruction.
There are two ways probed instruction is changed
. In this case just print an error and continue.
Signed-off-by: Ravi Bangoria
Acked-by: Naveen N. Rao
---
v3: https://lore.kernel.org/r/20210304050529.59391-1-ravi.bango...@linux.ibm.com
v3->v4:
- CONFIG_PPC64 check was not required, remove it.
- Use SZ_ macros instead of hardcoded numbers.
a
On 3/4/21 4:21 PM, Christophe Leroy wrote:
Le 04/03/2021 à 11:13, Ravi Bangoria a écrit :
On 3/4/21 1:02 PM, Christophe Leroy wrote:
Le 04/03/2021 à 06:05, Ravi Bangoria a écrit :
As per ISA 3.1, prefixed instruction should not cross 64-byte
boundary. So don't allow Uprobe
On 3/4/21 1:02 PM, Christophe Leroy wrote:
Le 04/03/2021 à 06:05, Ravi Bangoria a écrit :
As per ISA 3.1, prefixed instruction should not cross 64-byte
boundary. So don't allow Uprobe on such prefixed instruction.
There are two ways probed instruction is changed in mapped pages.
First
@@ -41,6 +41,14 @@ int arch_uprobe_analyze_insn(struct arch_uprobe *auprobe,
if (addr & 0x03)
return -EINVAL;
+ if (!IS_ENABLED(CONFIG_PPC64) || !cpu_has_feature(CPU_FTR_ARCH_31))
+ return 0;
Sorry, I missed this last time, but I think we can drop
. In this case just print an error and continue.
Signed-off-by: Ravi Bangoria
---
v2:
https://lore.kernel.org/r/20210204104703.273429-1-ravi.bango...@linux.ibm.com
v2->v3:
- Drop restriction for Uprobe on suffix of prefixed instruction.
It needs lot of code change including generic code but w
99da74333b ("powerpc/sstep: Support VSX vector paired storage access
instructions")
Signed-off-by: Jordan Niethe
Yikes!
Reviewed-by: Ravi Bangoria
On 2/4/21 6:38 PM, Naveen N. Rao wrote:
On 2021/02/04 04:17PM, Ravi Bangoria wrote:
Don't allow Uprobe on 2nd word of a prefixed instruction. As per
ISA 3.1, prefixed instruction should not cross 64-byte boundary.
So don't allow Uprobe on such prefixed instruction as well.
There are two
On 2/4/21 9:42 PM, Naveen N. Rao wrote:
On 2021/02/04 06:38PM, Naveen N. Rao wrote:
On 2021/02/04 04:17PM, Ravi Bangoria wrote:
Don't allow Uprobe on 2nd word of a prefixed instruction. As per
ISA 3.1, prefixed instruction should not cross 64-byte boundary.
So don't allow Uprobe
On 2/4/21 6:45 PM, Naveen N. Rao wrote:
On 2021/02/04 04:19PM, Ravi Bangoria wrote:
On 2/4/21 4:17 PM, Ravi Bangoria wrote:
Don't allow Uprobe on 2nd word of a prefixed instruction. As per
ISA 3.1, prefixed instruction should not cross 64-byte boundary.
So don't allow Uprobe
On 2/6/21 11:36 PM, Oleg Nesterov wrote:
On 02/04, Ravi Bangoria wrote:
+static int get_instr(struct mm_struct *mm, unsigned long addr, u32 *instr)
+{
+ struct page *page;
+ struct vm_area_struct *vma;
+ void *kaddr;
+ unsigned int gup_flags = FOLL_FORCE
On 2/4/21 4:17 PM, Ravi Bangoria wrote:
Don't allow Uprobe on 2nd word of a prefixed instruction. As per
ISA 3.1, prefixed instruction should not cross 64-byte boundary.
So don't allow Uprobe on such prefixed instruction as well.
There are two ways probed instruction is changed in mapped
. But because the Uprobe is invalid, the entire mmap() operation
cannot be stopped. In this case just print an error and continue.
Signed-off-by: Ravi Bangoria
---
v1: http://lore.kernel.org/r/20210119091234.76317-1-ravi.bango...@linux.ibm.com
v1->v2:
- Instead of introducing new arch hook from verify_opc
On 1/28/21 10:50 PM, Naveen N. Rao wrote:
On 2021/01/15 11:46AM, Ravi Bangoria wrote:
Compiling kernel with -Warray-bounds throws below warning:
In function 'emulate_vsx_store':
warning: array subscript is above array bounds [-Warray-bounds]
buf.d[2] = byterev_8(reg->
ested-by: Naveen N. Rao
Signed-off-by: Ravi Bangoria
---
v1: http://lore.kernel.org/r/20210115061620.692500-1-ravi.bango...@linux.ibm.com
v1->v2:
- Change code only in the affected block
arch/powerpc/lib/sstep.c | 14 --
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git
On 1/19/21 10:56 PM, Oleg Nesterov wrote:
On 01/19, Ravi Bangoria wrote:
Probe on 2nd word of a prefixed instruction is invalid scenario and
should be restricted.
I don't understand this ppc-specific problem, but...
So far (up to Power9), instruction size was fixed at 4 bytes. But Power10
and continue.
Signed-off-by: Ravi Bangoria
---
arch/powerpc/kernel/uprobes.c | 28
include/linux/uprobes.h | 1 +
kernel/events/uprobes.c | 8
3 files changed, 37 insertions(+)
diff --git a/arch/powerpc/kernel/uprobes.c b/arch/powerpc/kernel
'union vsx_reg buf' into an array.
Also treat the function argument 'union vsx_reg *reg' as an array
instead of a pointer, because callers actually pass an array to it.
Fixes: af99da74333b ("powerpc/sstep: Support VSX vector paired storage access
instructions")
Signed-off-by: Ravi Ban
the
new member to be added only at the end, i.e. we can allow a
nested guest even when L0 hv_guest_state.version > L1
hv_guest_state.version, though the other way around is not
possible.
Signed-off-by: Ravi Bangoria
Reviewed-by: Fabiano Rosas
---
arch/powerpc/include/asm/hvcall.h |
Introduce KVM_CAP_PPC_DAWR1 which can be used by Qemu to query whether
kvm supports 2nd DAWR or not. The capability is by default disabled
even when the underlying CPU supports 2nd DAWR. Qemu needs to check
and enable it manually to use the feature.
Signed-off-by: Ravi Bangoria
't rename KVM_REG_PPC_DAWR, it's an uapi macro
- patch #3: Increment HV_GUEST_STATE_VERSION
- Split kvm and selftests patches into different series
- Patches rebased to paulus/kvm-ppc-next (cf59eb13e151) + few
other watchpoint patches which are yet to be merged in
paulus/kvm-ppc-next.
Ravi Bangoria (
Power10 introduces a second DAWR. Use real register names (with
suffix 0) from the ISA for current macros and variables used by kvm.
One exception is KVM_REG_PPC_DAWR. Keep it as-is because it's
uapi; changing it will break userspace.
Signed-off-by: Ravi Bangoria
---
arch/powerpc/include/asm
kvm code assumes single DAWR everywhere. Add code to support 2nd DAWR.
DAWR is a hypervisor resource and thus H_SET_MODE hcall is used to set/
unset it. Introduce new case H_SET_MODE_RESOURCE_SET_DAWR1 for 2nd DAWR.
Also, kvm will support 2nd DAWR only if CPU_FTR_DAWR1 is set.
Signed-off-by: Ravi
: 30df74d67d48 ("powerpc/watchpoint/xmon: Support 2nd DAWR")
Signed-off-by: Ravi Bangoria
---
arch/powerpc/xmon/xmon.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
index 55c43a6c9111..5559edf36756 100644
--- a/arch/powerpc/xmon/xmon.c
ementioned instructions to make sure that they
are treated as unknown by the emulation infrastructure when
RA = 0 or RA = RT. The kernel will then fallback to executing
the instruction on hardware.
Signed-off-by: Sandipan Das
For the series:
Reviewed-by: Ravi Bangoria
diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
index 855457ed09b5..25a5436be6c6 100644
--- a/arch/powerpc/lib/sstep.c
+++ b/arch/powerpc/lib/sstep.c
@@ -2157,11 +2157,15 @@ int analyse_instr(struct instruction_op *op, const
struct pt_regs *regs,
case 23: /* lwzx */
Introduce KVM_CAP_PPC_DAWR1 which can be used by Qemu to query whether
kvm supports 2nd DAWR or not.
Signed-off-by: Ravi Bangoria
---
arch/powerpc/kvm/powerpc.c | 3 +++
include/uapi/linux/kvm.h | 1 +
2 files changed, 4 insertions(+)
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc
ame KVM_REG_PPC_DAWR, it's an uapi macro
- patch #3: Increment HV_GUEST_STATE_VERSION
- Split kvm and selftests patches into different series
- Patches rebased to paulus/kvm-ppc-next (cf59eb13e151) + few
other watchpoint patches which are yet to be merged in
paulus/kvm-ppc-next.
Ravi Bangoria
the
new member to be added only at the end. i.e. we can allow
nested guest even when L0 hv_guest_state.version > L1
hv_guest_state.version. Though, the other way around is not
possible.
Signed-off-by: Ravi Bangoria
---
arch/powerpc/include/asm/hvcall.h | 17 +++--
arch/powerpc/
kvm code assumes single DAWR everywhere. Add code to support 2nd DAWR.
DAWR is a hypervisor resource and thus H_SET_MODE hcall is used to set/
unset it. Introduce new case H_SET_MODE_RESOURCE_SET_DAWR1 for 2nd DAWR.
Also, kvm will support 2nd DAWR only if CPU_FTR_DAWR1 is set.
Signed-off-by: Ravi
Power10 introduces a second DAWR. Use real register names (with
suffix 0) from the ISA for current macros and variables used by kvm.
One exception is KVM_REG_PPC_DAWR. Keep it as-is because it's
uapi; changing it will break userspace.
---
arch/powerpc/include/asm
+ 64B' generates an address that has a carry into bit
52 (crosses 2K boundary)
Handle such spurious exceptions by treating them as extraneous and
emulating/single-stepping the instruction without generating an event.
Signed-off-by: Ravi Bangoria
[Fixed build warning reported by kernel test
Hi Michael,
+static void __init fixup_cpu_features(void)
+{
+ unsigned long version = mfspr(SPRN_PVR);
+
+ if ((version & 0x) == 0x00800100)
+ cur_cpu_spec->cpu_features |= CPU_FTR_POWER10_DD1;
+}
+
static int __init early_init_dt_scan_cpus(unsigned long
+static void __init fixup_cpu_features(void)
+{
+ unsigned long version = mfspr(SPRN_PVR);
+
+ if ((version & 0x) == 0x00800100)
+ cur_cpu_spec->cpu_features |= CPU_FTR_POWER10_DD1;
+}
+
I am just wondering why this is needed here, but the same thing is not
On 10/22/20 10:41 AM, Jordan Niethe wrote:
On Thu, Oct 22, 2020 at 2:40 PM Ravi Bangoria
wrote:
POWER10_DD1 feature flag will be needed while adding
conditional code that applies only for Power10 DD1.
Signed-off-by: Ravi Bangoria
---
arch/powerpc/include/asm/cputable.h | 8
+ 64B' generates an address that has a carry into bit
52 (crosses 2K boundary)
Handle such spurious exceptions by treating them as extraneous and
emulating/single-stepping the instruction without generating an event.
Signed-off-by: Ravi Bangoria
[Fixed build warning reported by kernel test
POWER10_DD1 feature flag will be needed while adding
conditional code that applies only for Power10 DD1.
Signed-off-by: Ravi Bangoria
---
arch/powerpc/include/asm/cputable.h | 8 ++--
arch/powerpc/kernel/dt_cpu_ftrs.c | 3 +++
arch/powerpc/kernel/prom.c | 9 +
3 files
POWER10_DD1 feature flag will be needed while adding
conditional code that applies only for Power10 DD1.
Signed-off-by: Ravi Bangoria
---
arch/powerpc/include/asm/cputable.h | 8 ++--
arch/powerpc/kernel/dt_cpu_ftrs.c | 3 +++
arch/powerpc/kernel/prom.c | 9 +
3 files
+ 64B' generates an address that has a carry into bit
52 (crosses 2K boundary)
Handle such spurious exceptions by treating them as extraneous and
emulating/single-stepping the instruction without generating an event.
Signed-off-by: Ravi Bangoria
---
Dependency: VSX-32 byte emulation support
Hi Daniel,
On 10/12/20 7:14 PM, Daniel Axtens wrote:
Hi,
To review this, I looked through the supported instructions to see if
there were any that I thought might have been missed.
I didn't find any other v3.1 ones, although I don't have a v3.1 ISA to
hand so I was basically looking for
Hi Daniel,
On 10/12/20 7:21 AM, Daniel Axtens wrote:
Hi,
Apologies if this has come up in a previous revision.
case 1:
+ if (!cpu_has_feature(CPU_FTR_ARCH_31))
+ return -1;
+
prefix_r = GET_PREFIX_R(word);
ra =
: PASS
emulate_step_test: pstxvp : PASS
Signed-off-by: Balamuruhan S
Signed-off-by: Ravi Bangoria
---
arch/powerpc/lib/test_emulate_step.c | 270 +++
1 file changed, 270 insertions(+)
diff --git a/arch/powerpc/lib/test_emulate_step.c
b/arch/powerpc/lib
)
* Store VSX Vector Paired Indexed (stxvpx)
* Prefixed Store VSX Vector Paired (pstxvp)
Suggested-by: Naveen N. Rao
Signed-off-by: Balamuruhan S
Signed-off-by: Ravi Bangoria
---
arch/powerpc/include/asm/ppc-opcode.h | 13 +
1 file changed, 13 insertions(+)
diff --git a/arch
(plxvp)
* Store VSX Vector Paired (stxvp)
* Store VSX Vector Paired Indexed (stxvpx)
* Prefixed Store VSX Vector Paired (pstxvp)
Suggested-by: Naveen N. Rao
Signed-off-by: Balamuruhan S
Signed-off-by: Ravi Bangoria
[kernel test robot reported a build failure]
Reported-by: kernel test robot
fixed load/stores")
Signed-off-by: Ravi Bangoria
---
arch/powerpc/lib/sstep.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
index e6242744c71b..faf0bbf3efb7 100644
--- a/arch/powerpc/lib/sstep.c
+++ b/arch/powerpc/l
erpc sstep: Add support for prefixed load/stores")
Signed-off-by: Balamuruhan S
Signed-off-by: Ravi Bangoria
---
arch/powerpc/lib/sstep.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
index e9dcaba9a4f8..e6242744c71b 100644
--
: Add testcases for VSX vector paired load/store
instructions
Ravi Bangoria (1):
powerpc/sstep: Cover new VSX instructions under CONFIG_VSX
arch/powerpc/include/asm/ppc-opcode.h | 13 ++
arch/powerpc/lib/sstep.c | 160 ---
arch/powerpc/lib/test_emulate_step.c
: PASS
emulate_step_test: pstxvp : PASS
Signed-off-by: Balamuruhan S
Signed-off-by: Ravi Bangoria
---
arch/powerpc/lib/test_emulate_step.c | 270 +++
1 file changed, 270 insertions(+)
diff --git a/arch/powerpc/lib/test_emulate_step.c
b/arch/powerpc/lib
)
* Store VSX Vector Paired Indexed (stxvpx)
* Prefixed Store VSX Vector Paired (pstxvp)
Suggested-by: Naveen N. Rao
Signed-off-by: Balamuruhan S
Signed-off-by: Ravi Bangoria
---
arch/powerpc/include/asm/ppc-opcode.h | 13 +
1 file changed, 13 insertions(+)
diff --git a/arch
(plxvp)
* Store VSX Vector Paired (stxvp)
* Store VSX Vector Paired Indexed (stxvpx)
* Prefixed Store VSX Vector Paired (pstxvp)
Suggested-by: Naveen N. Rao
Signed-off-by: Balamuruhan S
Signed-off-by: Ravi Bangoria
---
arch/powerpc/lib/sstep.c | 146 +--
1
VSX vector paired instructions operate with an octword (32-byte)
operand for loads and stores between storage and a pair of two
sequential Vector-Scalar Registers (VSRs). There are 4 word
instructions and 2 prefixed instructions that provide this
32-byte storage access operations - lxvp, lxvpx,
From: Balamuruhan S
Unconditional emulation of prefixed instructions will allow
emulation of them on Power10 predecessors which might cause
issues. Restrict that.
Signed-off-by: Balamuruhan S
Signed-off-by: Ravi Bangoria
---
arch/powerpc/lib/sstep.c | 6 ++
1 file changed, 6 insertions
On 9/17/20 6:54 PM, Rogerio Alves wrote:
On 9/2/20 1:29 AM, Ravi Bangoria wrote:
Patch #1 fixes an issue with quadword instructions on p10 predecessors.
Patch #2 fixes issue for vector instructions.
Patch #3 fixes a bug about watchpoint not firing when created with
ptrace
Hi Paul,
On 9/2/20 8:02 AM, Paul Mackerras wrote:
On Thu, Jul 23, 2020 at 03:50:51PM +0530, Ravi Bangoria wrote:
Patch #1, #2 and #3 enables p10 2nd DAWR feature for Book3S kvm guest. DAWR
is a hypervisor resource and thus H_SET_MODE hcall is used to set/unset it.
A new case
Hi Paul,
diff --git a/arch/powerpc/include/asm/hvcall.h
b/arch/powerpc/include/asm/hvcall.h
index 33793444144c..03f401d7be41 100644
--- a/arch/powerpc/include/asm/hvcall.h
+++ b/arch/powerpc/include/asm/hvcall.h
@@ -538,6 +538,8 @@ struct hv_guest_state {
s64 tb_offset;
u64
Hi Paul,
On 9/2/20 7:19 AM, Paul Mackerras wrote:
On Thu, Jul 23, 2020 at 03:50:52PM +0530, Ravi Bangoria wrote:
Power10 is introducing second DAWR. Use real register names (with
suffix 0) from ISA for current macros and variables used by kvm.
Most of this looks fine, but I think we should
Hi Paul,
On 9/2/20 7:31 AM, Paul Mackerras wrote:
On Thu, Jul 23, 2020 at 03:50:53PM +0530, Ravi Bangoria wrote:
kvm code assumes single DAWR everywhere. Add code to support 2nd DAWR.
DAWR is a hypervisor resource and thus H_SET_MODE hcall is used to set/
unset it. Introduce new case
, Kernel Access Userspace, len: 1: Ok
success: ptrace-hwbreak
Suggested-by: Pedro Miraglia Franco de Carvalho
Signed-off-by: Ravi Bangoria
---
.../selftests/powerpc/ptrace/ptrace-hwbreak.c | 48 ++-
1 file changed, 46 insertions(+), 2 deletions(-)
diff --git a/tools/testing
that availability of 2nd DAWR is
independent of this flag and should be checked using
ppc_debug_info->num_data_bps.
Signed-off-by: Ravi Bangoria
---
Documentation/powerpc/ptrace.rst | 1 +
arch/powerpc/include/uapi/asm/ptrace.h| 1 +
arch/powerpc/kernel/ptrace/ptrace-noadv.c | 2 ++
3 fi
, hw_len needs to be set
directly.
Fixes: b57aeab811db ("powerpc/watchpoint: Fix length calculation for unaligned
target")
Signed-off-by: Ravi Bangoria
---
arch/powerpc/kernel/ptrace/ptrace-noadv.c | 1 +
arch/powerpc/xmon/xmon.c | 1 +
2 files changed, 2 insertions(+)
dware breakpoints rewrite to handle non DABR
breakpoint registers")
Reported-by: Pedro Miraglia Franco de Carvalho
Signed-off-by: Ravi Bangoria
---
arch/powerpc/include/asm/hw_breakpoint.h | 3 ++
arch/powerpc/kernel/process.c | 48 +++
arch/powerpc/kernel/ptr
CONFIG_HAVE_HW_BREAKPOINT is not set.
Signed-off-by: Ravi Bangoria
---
arch/powerpc/include/asm/hw_breakpoint.h | 8 +
arch/powerpc/kernel/Makefile | 3 +-
arch/powerpc/kernel/hw_breakpoint.c | 159 +
.../kernel/hw_breakpoint_constraints.c
really leak
any kernel address in signal info. Setting HW_BRK_TYPE_PRIV_ALL will
also help to find scenarios when kernel accesses user memory.
Reported-by: Pedro Miraglia Franco de Carvalho
Suggested-by: Pedro Miraglia Franco de Carvalho
Signed-off-by: Ravi Bangoria
---
arch/powerpc/kernel
one watchpoint")
Signed-off-by: Ravi Bangoria
---
arch/powerpc/kernel/hw_breakpoint.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/powerpc/kernel/hw_breakpoint.c
b/arch/powerpc/kernel/hw_breakpoint.c
index 9f7df1c37233..f6b24838ca3c 100644
--- a/arch/powerpc/kernel/hw_breakpoi
it as extraneous and emulate/single-step it
before continuing.
Reported-by: Pedro Miraglia Franco de Carvalho
Fixes: 74c6881019b7 ("powerpc/watchpoint: Prepare handler to handle more than
one watchpoint")
Signed-off-by: Ravi Bangoria
---
arch/powerpc/include/asm/hw_breakpoint.h | 1 +
ar
act if condition, suggested by Christophe
Ravi Bangoria (8):
powerpc/watchpoint: Fix quadword instruction handling on p10
predecessors
powerpc/watchpoint: Fix handling of vector instructions
powerpc/watchpoint/ptrace: Fix SETHWDEBUG when
CONFIG_HAVE_HW_BREAKPOINT=N
powerpc/watchpoint: Move D
Hi Christophe,
+static int cache_op_size(void)
+{
+#ifdef __powerpc64__
+ return ppc64_caches.l1d.block_size;
+#else
+ return L1_CACHE_BYTES;
+#endif
+}
You've got l1_dcache_bytes() in arch/powerpc/include/asm/cache.h to do that.
+
+void wp_get_instr_detail(struct pt_regs *regs,