pkarashchenko commented on a change in pull request #5758:
URL: https://github.com/apache/incubator-nuttx/pull/5758#discussion_r829092921
##########
File path: arch/risc-v/src/common/riscv_cpuindex.c
##########
@@ -48,10 +50,14 @@
*
****************************************************************************/
-int up_cpu_index(void)
+uintptr_t riscv_cpuindex(void)
Review comment:
Let's rename to `riscv_mhartid()`. What do you think?
##########
File path: arch/risc-v/src/common/riscv_cpuindex.c
##########
@@ -48,10 +50,14 @@
*
****************************************************************************/
-int up_cpu_index(void)
+uintptr_t riscv_cpuindex(void)
{
- int mhartid;
+ return READ_CSR(mhartid);
+}
- asm volatile ("csrr %0, mhartid": "=r" (mhartid));
- return mhartid;
+#ifdef CONFIG_SMP
+int up_cpu_index(void)
+{
+ return (int)riscv_cpuindex();
Review comment:
This could be a solution. Sorry, I'm not fully aware of the values that
`mhartid` can carry, so I'm just wondering whether we could go with
```
int up_cpu_index(void)
{
uintptr_t mhartid = READ_CSR(mhartid);
return mhartid;
}
```
or even with
```
int up_cpu_index(void)
{
return READ_CSR(mhartid);
}
```
Do we need the upper bits? If the values are in the range `0..50`, for
example, then maybe we can reuse the existing API and not introduce
`riscv_cpuindex`.
##########
File path: arch/risc-v/src/mpfs/Make.defs
##########
@@ -55,19 +56,25 @@ CHIP_CSRCS += mpfs_irq.c mpfs_irq_dispatch.c
CHIP_CSRCS += mpfs_lowputc.c mpfs_serial.c
CHIP_CSRCS += mpfs_start.c mpfs_timerisr.c
CHIP_CSRCS += mpfs_gpio.c mpfs_systemreset.c
+CHIP_CSRCS += mpfs_plic.c
ifeq ($(CONFIG_MPFS_DMA),y)
CHIP_CSRCS += mpfs_dma.c
endif
ifeq ($(CONFIG_BUILD_PROTECTED),y)
+CHIP_CSRCS += mpfs_userspace.c
+CMN_UASRCS += riscv_signal_handler.S
+endif
+
+ifeq ($(CONFIG_BUILD_KERNEL),y)
+
+endif
Review comment:
```suggestion
```
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]