Hi Vasily,

I see that for different syscalls muslc uses different routes that end up in the
same place. For example, SYS_ioctl from __stdout_write uses __syscall, provided
by libmuslc/arch/<arch>/syscall_arch.h (implemented as CALL_SYSINFO). At the
same time, the syscall_cp function from __stdio_write is implemented inside
libmuslc/src/internal/<arch>/syscall.s, which uses __sysinfo in turn. So the
question is: why?

The reason all syscalls are directed through __sysinfo is that the 'service'
that implements our syscalls is frequently ourselves (for example,
libsel4muslcsys). However, this service is C code that would also like to use
parts of a C library. If muslc called directly into muslcsys there would be a
circular build dependency; late binding through __sysinfo resolves this.

The second question is about the kernel. At the moment I am using a static TLB
configuration in the kernel: one region for the rootserver aligned to 2MB, and
one region for the boot info page (also aligned). Thus I do not take TLB traps,
since everything is in the TLB. In the future, of course, I should add handling
of TLB misses somewhere in the plat/arch part of the kernel. I am not yet
familiar with memory support in seL4; is there any syscall for delivering TLB
traps? (I see nothing similar.) Is there any check inside, for example, the
SEND call for the presence of a page in the TLB?

Our current architectures (ARM and x86) have hardware-loaded TLBs, so this is a
non-issue for them. I am not sure how software-loaded TLBs would best be
supported on seL4. Unfortunately the best person to answer this, Dr Kevin
Elphinstone, is away at the moment.

Adrian

________________________________

The information in this e-mail may be confidential and subject to legal 
professional privilege and/or copyright. National ICT Australia Limited accepts 
no liability for any damage caused by this email or its attachments.
_______________________________________________
Devel mailing list
[email protected]
https://sel4.systems/lists/listinfo/devel