Re: git retrieval with git blocked
Hello Joel,

The best way I found to use the RSB behind restrictive firewalls was to follow these instructions: http://www.emilsit.net/blog/archives/how-to-use-the-git-protocol-through-a-http-connect-proxy/

It essentially forces all git traffic to go through a SOCKS proxy. This works well if the firewall doesn't block SSH connections to the outside world. However, it can be a bit slow.

Best Regards,
Cláudio

On Mon, Feb 13, 2017 at 5:49 PM Joel Sherrill wrote:
> Hi
>
> I am helping someone with git and ftp blocked by a firewall. That definitely
> limits the options. We used github for some clones using https.
>
> + Does rtems.org support https clone?
> + Can we do snapshots from git.rtems.org like github?
>
> Any RSB hints for the firewall situation?
>
> Thanks.
>
> --joel
___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel
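For reference, the setup in the linked article amounts to tunnelling git's transport through an SSH-based SOCKS proxy. A rough sketch of the idea, where `outside.example.com` stands for a hypothetical SSH host you control outside the firewall (the exact helper program and options depend on what the firewall permits):

```
# ~/.ssh/config -- open a local SOCKS proxy through an SSH host
# outside the firewall (outside.example.com is a placeholder)
Host tunnel
    HostName outside.example.com
    DynamicForward 1080

# Then point git at the SOCKS proxy, e.g. via the "connect" helper
# mentioned in the linked article:
#   git config --global core.gitproxy "connect -S localhost:1080 %h %p"
```

With `ssh tunnel` left running, `git clone git://...` traffic is carried through the SSH connection, which is why this only works when outbound SSH itself is not blocked.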
Re: BSP requests.
Hello Amar,

I was also able to build sparc/sis on Fedora 18 (Core 2 Duo @ 2.93 GHz). Build time was about 1m10s. However, I had to update waf; the version available in Fedora doesn't seem to work with RTEMS' waf.

Please also add the arm/beagle BSP.

Thanks, Best Regards,
Cláudio

On Mon Feb 09 2015 at 17:22:45 Amar Takhar a...@rtems.org wrote:
Does anyone have a request for a working BSP in the waf build? At one point I had all 168 working; this was a year or so ago. I will be slowly updating all the current BSPs as time permits. If there is one you wish to see earlier please let me know; it does not matter what order I fix them in.
Amar.
___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel
Re: Rehosting Update
Hi,

Password recovery is working correctly, but I can't access any patch attached to a ticket. This feature seems to be broken.

Cláudio

On Thu Nov 20 2014 at 15:54:58 Joel Sherrill joel.sherr...@oarcorp.com wrote:
Hi,
https://devel.rtems.org/ is now up with our Trac instance. As mentioned in a previous update, this merges Bugzilla and Mediawiki. All the old content is there but it has likely moved around. If you spot something broken or out of date, fix it or report it.
Old bugzilla accounts have been migrated to Trac but the passwords all need to be reset. The account names are the account portion of an email address, so for u...@example.com it would be user. Mine is joel.sherrill since I use joel.sherr...@oarcorp.com. Passwords must be reset; reset emails will go to the registered account email. If anyone has problems, email a...@rtems.org.
--
Joel Sherrill, Ph.D.
Director of Research Development
joel.sherr...@oarcorp.com
On-Line Applications Research
Ask me about RTEMS: a free RTOS
Huntsville AL 35805
Support Available (256) 722-9985
___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel
Re: MSc (by research) involving RTEMS | University of York
Hi Hesham,

There are several commercially available MPSoCs:
- MPPA from Kalray
- TILE64 from Tilera, and its radiation-tolerant version, Maestro
- Xentium from Recore Systems
but you need to investigate their OS support. Most of them should support some form of Linux SMP. Xentium supports RTEMS, but solely on the LEON core used to drive the system. These were used in several projects, so a simple Google search should yield some interesting results.

In terms of research, there are several EC-funded projects that research time-predictability and MPSoCs/CMPs. I will leave links to a few of them, as they might be of interest to your research:
- T-CREST (Time-Predictable Multi-Core Architecture for Embedded Systems): http://www.t-crest.org/
- P-SOCRATES (Parallel Software Framework for Time-Critical Many-core Systems): http://www.p-socrates.eu/
- PREDATOR: http://www.predator-project.eu/
- MERASA (Multi-Core Execution of Hard Real-Time Applications Supporting Analysability): http://ginkgo.informatik.uni-augsburg.de/merasa-web/
- parMERASA (Multi-Core Execution of Parallelised Hard Real-Time Applications Supporting Analysability):
- CompSOC: http://compsoc.eu/

Although none of them focuses on the operating system itself, some include operating system ports and reason about the issues found in these kinds of systems. The T-CREST project includes a fully open-source MPSoC (https://github.com/t-crest/) that can be instantiated on a Xilinx platform. In addition, RTEMS is ported to the T-CREST MPSoC (https://github.com/t-crest/rtems), but solely in pure AMP configurations (fully decoupled and independent RTEMS instances running on each core). Nevertheless, I think adding support for RTEMS' multiprocessor extensions should be fairly easy. On the other hand, P-SOCRATES' objective is to develop a software stack aimed at bridging the gap between the application design and a hardware many-core platform.
I guess this includes scheduling and operating system mappings, so it should be more aligned with your research.

Best Regards,
Cláudio

On Mon, Oct 27, 2014 at 10:04 AM, Hesham Moustafa heshamelmat...@gmail.com wrote:
Hi all,
This year, I am studying for an MSc (by research) degree at the University of York. My thesis proposal title is REAL-TIME OPERATING SYSTEMS FOR LARGE SCALE MANY-CORE NETWORK-ON-CHIP ARCHITECTURES. Part of this research will include some work with RTEMS. That said, I'd appreciate any materials (papers, publications, references, tutorials, etc.) that might be of help regarding that topic, whether or not they relate to RTEMS. I think Sebastian has contributed a lot to this area recently. You may also want to suggest building some simple multi-processor and/or many-core systems that RTEMS currently supports, and how to simulate them.
Thanks,
Hesham
___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel
Re: Including paravirtualization headers in RTEMS
Hello,

I agree with you. POK should define a virtual CPU, and RTEMS should support this POK virtual CPU interface like it does any other normal architecture. Instead of writing directly to protected registers, RTEMS sets up arguments and executes a trap. As a result, the functionality from the previously discussed abstraction layer can be implemented inside the hardware-dependent sections of RTEMS.

My only question arises when we consider other interfaces that are not directly related to CPU/timer/interrupt virtualization, but are nonetheless necessary. A simple example of this would be the ports, error reporting and partition state management interfaces that are provided by most partitioning kernels. This functionality must be exposed to the user application, but RTEMS has no equivalent API defined (in score/sapi) because most of these APIs only make sense in a partitioned environment. We could implement them inside the BSP, but that feels like a hack. Alternatively, we could implement them as a simple RTEMS lib.

Cláudio

On Mon, May 26, 2014 at 1:36 AM, Chris Johns chr...@rtems.org wrote:
On 26/05/2014 4:39 am, Cláudio Silva wrote:
Sorry all, mis-sent. Returning to where I was: Yes, partitions are fully linked executables created off-line. No linking occurs at run-time. The partition ELF is simply loaded into memory by a partition loader or by the bootloader. Typically, you have a thin abstraction layer defining simple functions that will trap into the partitioning kernel. Something like: https://github.com/t-crest/ospat/blob/master/libpok/middleware/portqueueingsend.c
Thanks. This lines up with my understanding. I think RTEMS needs to view POK or any virtual environment as a black box, and we cannot assume anything about the environment other than the ABI presented to us. We cannot look through to the host system and do things as a result.
Maintaining a strict black-box view may not be possible; however, we should start with the black box and move to a position that is practical and possible. We should not borrow (reference in the POK source tree) header files to pick up common definitions, directly or indirectly via static libraries. Any interfaces defined by the partition's host need to be formally exported from the project and made available. This defines the ABI we are working against. I see POK is highly configurable, so this complicates what is exported. If I build POK with port queueing enabled and build a guest that uses it, then reconfigure POK and remove port queueing but do not rebuild the guest, will POK detect the problem and handle the mismatch?

In the POK + RTEMS case, this library is being compiled by POK's compiler and then linked into each partition's executable. When this partition is an RTEMS one, the remaining binaries (RTEMS + application) were compiled with the RTEMS compiler. Therefore you are linking two sets of binaries produced by different compilers.

We cannot do this. Given the type of applications expected to use POK, I would expect an audit to flag this as a fail. There are a number of reasons why this is not a sustainable path forward; the simplest one is having code in the RTEMS partition built with a compiler we do not control. In this example the dependency is backwards: for example, will the POK project make sure its compiler can always build the interface and work with RTEMS, i.e. all tests pass? I suspect the POK maintainers will have limited or no real interest in RTEMS's development process, and I see this as normal and expected. I do not see this path as being sustainable and I do not support it. As a hack to get things going it has been perfectly fine.

I guess this is also true for the recently submitted XtratuM BSP.

The current XtratuM BSP is not going to happen, so let's put that one to one side.
From the RTEMS point of view I see POK as no different to any architecture we support. We support running on hard IP in a device, for example a Blackfin; a mix of hard IP and soft IP, for example a Zynq; soft IP, for example a NIOS2; hard IP (or mixed soft IP) and software intervention in a virtual environment, for example POK; and full software emulation via simulation, for example qemu. For RTEMS to support an architecture we need separation so each part can vary in version, the version dependencies flow down to RTEMS and not in the other direction, and we need a way to test. Testing means we have a stable version of POK built on our test system and we build RTEMS against that.

We have several options available:
- Let the two binaries (RTEMS + abstraction lib), compiled with different compilers, be linked together (how it is currently happening).

No.

- Compile the partitioning kernel abstraction layer with the RTEMS compiler - this will result in a complex configuration and build procedure.

This is possible; however, it depends on how this is implemented and where the code resides.
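The trap-based interface discussed in this thread (RTEMS sets up arguments, then traps into the partitioning kernel) can be sketched as below. The hypercall numbers and names are purely illustrative, not POK's real ABI, and the trap instruction is replaced by a function pointer so the calling convention can be exercised on an ordinary host machine:

```c
#include <stdint.h>

/* Hypothetical hypercall numbers -- illustrative only, not POK's ABI. */
enum { HCALL_CONSOLE_WRITE = 1, HCALL_IRQ_ENABLE = 2 };

/* In a real guest this would execute a trap instruction ("ta" on
   SPARC, "int" on x86) with the arguments in registers.  Here it is
   routed through a function pointer so the sketch runs on a host. */
static intptr_t (*hypervisor_trap)(int nr, intptr_t a0, intptr_t a1);

/* Guest-side wrapper: the abstraction layer exposes plain functions
   that marshal arguments and enter the kernel. */
static intptr_t hypercall2(int nr, intptr_t a0, intptr_t a1)
{
    return hypervisor_trap(nr, a0, a1);
}

/* Fake "host" backend used only to exercise the calling convention. */
static intptr_t fake_host(int nr, intptr_t a0, intptr_t a1)
{
    return (nr == HCALL_IRQ_ENABLE) ? a0 + a1 : -1;
}
```

The point of the wrapper is that everything above it (drivers, score code) stays compiler-neutral: only the one trap site depends on the host's ABI.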
Re: Including paravirtualization headers in RTEMS
Hello Chris,

Yes, partitions are fully linked executables created off-line. No linking occurs at run-time. The partition ELF is simply loaded into memory by a partition loader or the bootloader. Typically, you have a thin abstraction layer defining simple functions that will trap into the partitioning kernel. Something like:

On Sat, May 24, 2014 at 12:27 AM, Chris Johns chr...@rtems.org wrote:
On 23/05/2014 6:46 pm, Philipp Eppelt wrote:
On 05/21/2014 03:54 PM, Gedare Bloom wrote:
On Wed, May 21, 2014 at 4:04 AM, Christian Mauderer christian.maude...@embedded-brains.de wrote:
First of all: thanks for your comments. You will find answers below.
Am 20.05.2014 16:58, schrieb Gedare Bloom:
That is correct. It's part of XtratuM. Is there some preferred way of marking such headers?
Not that I know of. We have discussed a similar issue with the POK paravirtualization project. The problem is to allow external code linking to RTEMS. The design should be considered carefully and probably discussed in a separate thread.
To pick up this discussion, I would like first to picture the problem, second to collect possible solutions, and third to name some known problems with the solutions.
Thank you for posting this email. I followed the links and looked at the code and realised I need to ask more questions.
In a paravirtualized environment we need to make function calls from our guest system to the host system. The implementation of these functions relies solely on the host system.
Are they function calls? To me a function call assumes some sort of linking process between the guest and the host system, either static or via some form of runtime binding, i.e. a link editor. I also assume a guest is an RTEMS partition executable and the host is the POK microkernel providing the partitioned environment in some sort of protective environment.
Hence, at compile time the guest system is missing the implementation of these host-specific functions.
My understanding is partitions are loaded into an ELF image that POK loads and executes. Is there some sort of linking happening when the partition is written into the ELF image or when it is being loaded? For me this is really important, because I want to first understand which pieces of code are compiled into object files and by which compilers, then archived into static libraries, and then linked to any RTEMS code compiled with an RTEMS compiler. I do not understand if there are function calls into the POK kernel or syscall-type calls, which I consider an ABI, i.e. a trap or VM exception. If I can understand these needs I will be in a better position to help with the remaining topics in this email.
Chris

To provide the guest system with the function implementation there are a couple of possible solutions.

(A) Provide a host library
The host compiles a library containing the function implementations including all dependencies. This host library is passed to the guest system to resolve the `undefined reference` errors at link time. In the case of POK+RTEMS the resulting RTEMS binary is a valid POK partition and can run without further modifications. The recently proposed XtratuM BSP seems to follow a similar approach.

(B) Use function stubs and build a jump table in the host
The guest system makes use of function stubs, written into a special section in the binary to build. This section is analysed by the host system to intercept function calls and replace them with the host implementation of the calls. This approach is used in L4Linux and I gave it a shot in L4RTEMS. [1] shows the macro for the function stubs, [2] the host's resolver part.

(C) Add your solution

Problems with the solutions

P(A) Dependencies and foreign code
The library is host code, which has to be maintained and which can get out of sync between BSP and host. Licensing is another question. To my knowledge this approach won't get merged upstream.
P(B) Support by the host system
The host system needs a piece of code loading the guest binary and providing it with knowledge about the environment (external_resolver). We haven't discussed this approach in detail.

I hope this gives a good overview and we can have a fruitful discussion.
Cheers,
Philipp
[1] https://github.com/phipse/L4RTEMS/blob/master/c/src/lib/libbsp/l4vcpu/pc386/startup/l4lib.h
[2] https://github.com/phipse/L4RTEMS/blob/f5a32ed0121b3e5f0ac8ae4c53e636eab2257fb1/l4/pkg/RTEMS_wrapper/server/src/res.c
___
rtems-devel mailing list
rtems-devel@rtems.org
http://www.rtems.org/mailman/listinfo/rtems-devel
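Approach (B) above -- guest-side stubs resolved by the host -- can be pictured with a small table-plus-resolver sketch. In L4RTEMS the table would live in a dedicated ELF section that the host's resolver scans; all names here are invented for illustration, and the "host implementation" is a stand-in used to exercise the patching logic:

```c
#include <string.h>

/* A guest-side stub table: each entry starts out pointing at an
   "unresolved" placeholder; the host patches in real implementations. */
typedef void (*guest_stub_fn)(void);
struct stub_entry { const char *name; guest_stub_fn fn; };

static int resolved_calls;                     /* test instrumentation */
static void unresolved(void) { /* would trap/abort in a real system */ }
static void sample_impl(void) { ++resolved_calls; }  /* stand-in host fn */

static struct stub_entry stub_table[] = {
    { "console_putc", unresolved },
    { "irq_attach",   unresolved },
};

/* Host-side resolver: replace a named stub with the host's function. */
static int resolve_stub(const char *name, guest_stub_fn impl)
{
    for (size_t i = 0; i < sizeof stub_table / sizeof stub_table[0]; ++i) {
        if (strcmp(stub_table[i].name, name) == 0) {
            stub_table[i].fn = impl;
            return 0;   /* resolved */
        }
    }
    return -1;          /* unknown stub name */
}
```

This shows why (B) avoids linking host and guest binaries together: the only contract is the table layout, not object files from a foreign compiler.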
Re: [GSOC] Paravirtualization Layer in RTEMS.
Hello Youren,

Welcome to the RTEMS community. I hope you have a successful RTEMS Summer of Code :) Virtualization of RTEMS on POK is a complex mid-term project with several people involved; therefore, let's make an effort to keep the RTEMS wiki updated with new developments and design decisions.

Regarding the interrupt virtualization, let's keep the design as simple as possible. I think we should mimic the behaviour of the underlying hardware interrupt management, as it makes the paravirtualization easier. To summarize, the interrupt virtualization layer shall:
- forward interrupts to the currently executing partition in user mode;
- allow a partition to disable/enable virtual interrupts;
- allow a partition to mask/unmask virtual interrupts;
- track pending interrupts;
- forward pending interrupts as soon as the partition enables or unmasks interrupts.

I doubt you will be able to find a POK mechanism that can be adapted to achieve this. It needs to be developed, building on Philipp's work from last year. Could you try to summarize the design you have in mind in a blog post or wiki page? Maybe with some simple diagrams explaining the behaviour (how to register a virtual interrupt, how you keep interrupts pending, where and how to store the interrupt context, etc.).

Thanks, Best Regards,
Cláudio

On Sun, Apr 27, 2014 at 9:44 AM, Youren Shen shenyou...@gmail.com wrote:
Hi Gedare, Philipp and Cláudio Silva:
I'm very glad to be accepted by GSoC; that's not only an honor but also an opportunity to turn our ideas and design into code. Hi Cláudio, I'm a junior student from China, and my major in college is embedded systems. Nice to meet you. OK, clichés should not be long, so let's focus on the project. I have discussed with Philipp the jobs that should be done this summer. Thanks to Philipp's strict guidance, we already have a clear design. I have put a brief outline on my blog [1].
But if you don't mind, please let me introduce it now. Thanks to Philipp's contributions last year on the paravirtualization layer, we can already run an RTEMS application successfully on POK, which means we can focus on CPU virtualization this year. I'd like to continue the work based on the paravirtualization layer implemented by Philipp last year. By the way, the work last year was really creative and great. Because of limited time and the complexity of the task, the goal was not fully reached; I hope this year I can achieve it with the guidance of Philipp and the other mentors.

This year, the work includes two aspects: one is to implement a hypercall system; the other is to develop an interrupt handling system in POK using the functions we design in the hypercall system. The implementation of the hypercall system will not be difficult, but we should pay attention to the reusability and flexibility of the code, since we will certainly reuse it and extend the functionality of the hypercall system. On the other hand, we need to design an interrupt handling system to deliver the interrupts. As we discussed, the following should be the workflow of an interrupt:
1. When RTEMS starts up, it should register the interrupt by hypercall; then we will bind the corresponding interrupt to the partition that the RTEMS instance runs in.
2. When an interrupt (or an event, including traps like a floating-point error) occurs, POK will intercept it, do some common handling, and decide whether to deliver it based on the type of the interrupt. For example, if a timer interrupt occurs, POK will handle it in the kernel, but then send a virtual clock interrupt to every RTEMS partition.
3. If an interrupt should be delivered to a partition (RTEMS), the POK kernel packs the context of the interrupt and sends the interrupt package to the corresponding partition, then sets a bit in some mechanism to notify the partition that an interrupt happened while it was suspended.
4. When the partition is resumed, it should check the mask bit; if the bit is set, it uses the native interrupt handler in RTEMS (but with some sensitive functions replaced by hypercalls in the virtualization layer).

So, as we can see, in this section we need to design a system to save and deliver the interrupts. This system is called the event channel in Xen, and Philipp named it "notify", which is more accurate (because there is no need to implement a complicated system like the event channel). About this part, I have uploaded a blog post describing it, and I have summarized the to-dos as follows:
1. Build a structure to store the context of an interrupt.
2. Build a structure to mark that there is an event pending.
3. When the partition (RTEMS) resumes, check the value of the bit.
4. Design a callback function mechanism, including the operations of registering an event and a callback function. Event registration should be bound to the corresponding partition.
5. In the callback function, invoke the normal interrupt handler.
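The behaviour required of the virtualization layer in this thread -- disable/enable, mask/unmask, latch pending interrupts, and deliver them on enable/unmask -- can be modelled with a few bitmasks. This is a host-testable sketch of the state machine only, not POK or RTEMS code; all names are invented:

```c
#include <stdbool.h>
#include <stdint.h>

/* Per-partition virtual interrupt state: pending bits are latched
   while interrupts are disabled or the vector is masked, and
   delivered when the partition enables interrupts again. */
struct virt_pic {
    uint32_t pending;   /* latched but not yet delivered vectors */
    uint32_t mask;      /* per-vector mask bits */
    bool     enabled;   /* partition-wide virtual interrupt-enable */
};

/* Raise a vector: returns the vector if it can be delivered now,
   or -1 if it was only latched as pending. */
static int virt_pic_raise(struct virt_pic *p, int vector)
{
    uint32_t bit = 1u << vector;
    if (!p->enabled || (p->mask & bit)) {
        p->pending |= bit;
        return -1;
    }
    return vector;
}

/* Enable interrupts and deliver the lowest pending unmasked vector,
   if any (returns -1 when nothing is pending). */
static int virt_pic_enable(struct virt_pic *p)
{
    p->enabled = true;
    uint32_t ready = p->pending & ~p->mask;
    for (int v = 0; v < 32; ++v) {
        if (ready & (1u << v)) {
            p->pending &= ~(1u << v);
            return v;
        }
    }
    return -1;
}
```

The "forward pending interrupts as soon as the partition enables or unmasks" requirement is exactly the drain step in `virt_pic_enable()`.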
Re: SPARC Floating Point Context
I agree with your proposal. Detecting the fault immediately is very important. An alternative approach would be to run all tasks with EF set. This would enable integer tasks and libs to freely use FP registers. To spare the memory overhead of allocating an FP context for all tasks, only the FP tasks would get their FP context saved. Anyway, it should be documented very explicitly that interrupt handlers, including any calls (libc, user handlers) they make, shall be analysed in search of accesses to FPU registers.

On Thu, Mar 6, 2014 at 8:54 AM, Sebastian Huber sebastian.hu...@embedded-brains.de wrote:
On 2014-03-06 08:54, Sebastian Huber wrote:
Hello,
On 2014-03-05 17:35, Cláudio Silva wrote:
Hello Sebastian,
Regarding this issue, there was also an old problem where newer versions of gcc can generate FPU instructions even if no floating-point operations are made in the C code (paraphrasing Jiri): http://www.rtems.org/ml/rtems-users/2006/may/msg00040.htm
Yes, this is why I ask. I think the current implementation is extremely dangerous. GCC, at least on ARM and PowerPC, is pretty aggressive in its use of floating-point registers for integer-only tasks. For the SPARC, I don't know.
Discussed again in: http://www.rtems.org/ml/rtems-users/2009/september/msg8.html
The thread includes other discussions about FP usage in SPARC/RTEMS. I don't know if this is still applicable, but it is information that has been circulating between SPARC/RTEMS users. Anyway, the proposed solution was that RTEMS should always be compiled with soft-float or all tasks must be FP. My only comment regarding saving/restoring the FPU context on every interrupt is the performance penalty. How would you do it? Save it on the executing thread's FPU context and then skip the save part if a context switch is necessary? Further extend the ISF?
My proposal is this:
1. Disable the FPU on interrupt entry, before the high-level code is called.
2. Compile RTEMS with -msoft-float.
3. Remove the FPU context from the thread context.
4. In an interrupt-initiated thread dispatch, check the PSR_EF bit and in this case save/restore the floating-point registers.
5. Add a BSP-implemented method which tells the RTEMS kernel if the processor has an FPU. If it has an FPU, enable the PSR_EF bit in _CPU_Context_initialize() for floating-point tasks.
Advantages:
* In case GCC uses the FPU in interrupt handlers, you get a trap and know right away what is wrong, not later when FPU register corruption is noticed.
* Interrupt entry is fast since no FPU context needs to be saved.
* One library set for a BSP supports FP/non-FP applications.
* Works on SMP.
Disadvantages:
* No deferred floating-point context switch.
There is one problem with the -msoft-float for RTEMS approach. At link time a multilib must be selected to provide the libc, libm, libgcc, etc. In case the floating-point enabled variant is selected, RTEMS will use e.g. memcpy() compiled with floating point enabled. So it is possible that the integer-only RTEMS library will use floating-point instructions indirectly via library calls.
--
Sebastian Huber, embedded brains GmbH
Address : Dornierstr. 4, D-82178 Puchheim, Germany
Phone : +49 89 189 47 41-16
Fax : +49 89 189 47 41-09
E-Mail : sebastian.hu...@embedded-brains.de
PGP : Public key available on request.
(This message is not a commercial communication within the meaning of the EHUG.)
___
rtems-devel mailing list
rtems-devel@rtems.org
http://www.rtems.org/mailman/listinfo/rtems-devel
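Step 4 of the proposal hinges on checking the interrupted context's PSR_EF bit during an interrupt-initiated thread dispatch. Here is a sketch of just that decision; the PSR_EF bit position matches the SPARC V8 Processor State Register, but the function names are invented and the actual register save/restore is stubbed with a counter so the logic can be tested:

```c
#include <stdint.h>

#define PSR_EF (1u << 12)  /* SPARC V8 PSR "enable floating-point" bit */

static int fp_context_saves;   /* test instrumentation */

/* Stand-in for saving the %f registers and %fsr. */
static void save_fp_registers(void) { ++fp_context_saves; }

/* On an interrupt-initiated dispatch, the floating-point registers
   are saved only when the interrupted context had the FPU enabled;
   integer-only tasks (PSR_EF clear) pay no FP save/restore cost. */
static void dispatch_from_interrupt(uint32_t interrupted_psr)
{
    if (interrupted_psr & PSR_EF)
        save_fp_registers();
    /* ...the integer context switch happens in either case... */
}
```

This is why the scheme works together with step 1 (FPU disabled on interrupt entry): any stray FP instruction in an interrupt handler traps immediately instead of silently corrupting another task's registers.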
Re: [GSoC] Paravirtualization Layer - test on L4Re
Hello,

Good job, Philipp. I always guessed that we would need to tailor a BSP for each hypervisor. Regarding the different architectures, I think we may be able to get a consistent cross-architecture abstraction layer with some optional minor changes due to architecture-specific optimizations (i.e. in some architectures it may be possible to isolate privileged instructions inside a function instead of virtualizing the complete function; this is the case of _Context_Switch on SPARC).

Regards,
Cláudio

On Mon, Sep 23, 2013 at 2:22 PM, Philipp Eppelt philipp.epp...@mailbox.tu-dresden.de wrote:
Yes, it looks like it. But I think for each architecture we can share most parts of the BSP and separate the hypervisor specifics. I don't know much about virtualization on sparc/ppc/arm, so I can't say anything about these.
Cheers,
Philipp
On 09/23/2013 03:16 PM, Gedare Bloom wrote:
Sounds good. Would it be a BSP for each hypervisor for each target CPU type the hypervisor runs on?
-Gedare
On Mon, Sep 23, 2013 at 9:10 AM, Philipp Eppelt philipp.epp...@mailbox.tu-dresden.de wrote:
Hi,
in the last days I reused my work on L4RTEMS to do a quick and dirty test of the new virtualization layer. The implementation (which isn't working yet) showed that the i386/virtualpok BSP is a very good starting point, but the vCPU interface of L4Re brings its own dependencies, which must be added to include/ and in Makefile.am. I also had to extend the virtualizationlayerbsp.h file with these includes and a structure shared between L4Re and RTEMS. This struct accommodates a vCPU and console capability and a pointer to the vCPU state. They are filled in at start-up by L4Re and can then be used by RTEMS.
The take-away is two things: First, we might end up with our own BSP for each hypervisor. Second, as far as I can see now, they only differ in aspects of the layer, not in the drivers using the layer.
The code isn't on github yet, as I am short on time and have to sort things out first.
The obstacle at the moment is to create a library in L4Re which includes all L4Re dependencies and has only a few undefined references, which can be resolved by RTEMS.
Cheers,
Philipp
On 09/20/2013 09:22 AM, Philipp Eppelt wrote:
Hi,
what did I do in my project? I designed and implemented a virtualization layer, which should ease the virtualization of RTEMS across different hypervisors. To test the layer, and because of the ARINC 653 compliance, POK was chosen as a proof-of-concept host OS.
The project was a partial success. The layer is designed, implemented and a BSP is using it, and it is at least partially working. I didn't succeed in changing POK so it can forward interrupts to partitions reliably. But this is a POK-related issue, which I think won't be an issue on a host OS providing a vCPU abstraction. Also, implementing this for other architectures might be easier than for x86. A console is printing "hello world", and sometimes, under some circumstances, the base_sp sample printed output too. But the latter is not reliable.
I have documented my efforts, including implementation issues, GDB traps and where I left off, on the wiki page [0]. Also explanations on how to port the i386/virtualpok BSP to other hypervisors and how to port this approach to other architectures can be found there. The latter is pretty abstract, as I don't know much about the other architectures (arm, ppc, sparc).
I provide two patches:
* Split of the i386 CPU between score/cpu and libcpu. The interrupt handling was moved to libcpu and two new CPU variants were introduced there: native and virtual. The native one works like before, but the virtual one calls the virtualization layer instead of executing cli, sti or hlt. The list of affected functions is documented in the wiki [0]. BUT: this patch won't be merged, as includes in cpukit from libcpu aren't allowed (but it works).
But until the discussion about a new configuration option is finished and the option is implemented, there is no other way to achieve this.
* A new i386 BSP is introduced: virtualpok. It is the BSP corresponding to the virtual i386 CPU model and brings along the virtualization layer as two header files in its include/ directory. A console driver, clock driver and IRQ management are implemented and, as far as possible, tested on POK.
If you have questions on the work, I'd be happy to answer them.
Cheers,
Philipp
[0] http://wiki.rtems.org/wiki/index.php/GSOC_2013_-_Paravirtualization_of_RTEMS
___
rtems-devel mailing list
rtems-devel@rtems.org
http://www.rtems.org/mailman/listinfo/rtems-devel
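The native/virtual CPU split Philipp describes -- the virtual variant calling the virtualization layer instead of executing cli/sti/hlt -- can be pictured as routing the interrupt primitives through an indirection. A toy sketch, with all names invented and the hypervisor side stubbed by a flag so it can run on a host:

```c
#include <stdbool.h>

/* The CPU-variant interface: the native variant would execute
   cli/sti directly; the virtual variant calls the layer instead. */
struct cpu_ops {
    void (*irq_disable)(void);
    void (*irq_enable)(void);
};

static bool virtual_irq_flag = true;   /* stand-in for hypervisor state */

/* Virtual variant: each call would be a hypercall in a real guest. */
static void virt_irq_disable(void) { virtual_irq_flag = false; }
static void virt_irq_enable(void)  { virtual_irq_flag = true;  }

static const struct cpu_ops virtual_cpu = {
    virt_irq_disable,
    virt_irq_enable,
};
```

The drivers above this interface stay identical for both variants, which matches Philipp's observation that the BSPs "only differ in aspects of the layer, not in the drivers using the layer."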
Re: [GSOC] POK+RTEMS interrupt status
Hello Philipp,

Good job. This interrupt virtualization part is probably the hardest to implement in the hypervisor, and it's very dependent on the underlying architecture. Actually, I think this is even harder to implement on Intel architectures due to their interrupt model and limitations of the ISA. In some architectures you can simply return into the partition's ISR handler from the kernel handler, and it's the partition handler's responsibility to save its own task context and later return into the task. Anyway, as Cynthia said, document what you have learned so that others (or even yourself) can pick up where you left off.

Best Regards,
Cláudio

On Mon, Sep 16, 2013 at 2:41 PM, Philipp Eppelt philipp.epp...@mailbox.tu-dresden.de wrote:
Hi,
it's suggested "pencils down" today and I still have trouble with the interrupt handling.
I have this functionality:
- enable/disable interrupts for a partition
- register a handler for a partition
- the partition needs to acknowledge the interrupt before more are sent
The kernel handler sends the EOI signal to the PIC, but interrupts are not re-enabled before either the user-space handler is called (iret) or the interrupt handler returns (sti). The transition from kernel to user space is the hacky part. I copy the interrupted context from the kernel to the user stack. Unfortunately, the stack offset for the data isn't constant, so I insert a magic value to find the beginning of the data. Then I modify the eip to point to the user-space handler. The user-space handler recovers the vector number and executes the RTEMS interrupt dispatch routine. Afterwards it sends a system call, POK_SYSCALL_IRQ_PARTITION_ACK, to signal the kernel to send further clock interrupts. Then it restores the interrupted context and returns to the point of interruption.
Well, it doesn't work. When I enable interrupts through the EFLAGS register and return to user land, the handler isn't executing. Instead another clock interrupt occurs and we are back in the kernel.
Although this interrupt should be very fast (no user handler, as there was no ack), it still fires and fires and fires. So the user-space handler isn't making any progress. I acknowledge the clock IRQ (0x0) during the kernel handler, because clock ticks invoke pok_sched() and a descheduling of the current task would result in an unacknowledged interrupt. If I leave the interrupts disabled after the iret, the RTEMS C_dispatch_isr is called, which doesn't result in a missing-handler error, so I think my clock driver works.
I am out of ideas by now. Does this recurring interrupt behaviour look familiar to anyone? Any ideas where it comes from? How to fix it?
Cheers,
Philipp
___
rtems-devel mailing list
rtems-devel@rtems.org
http://www.rtems.org/mailman/listinfo/rtems-devel
Re: [VirtLayer] Changes to the interrupt handling in POK
Good job, Philipp. It seems a nice first iteration for interrupt virtualization. Did you get to try it? If I got it right, the POK part of the interrupt handler (CLOCK_HANDLER?) is running after the virtualized RTEMS handler?

Regards,
Cláudio

On Sun, Aug 18, 2013 at 12:18 AM, Philipp Eppelt philipp.epp...@mailbox.tu-dresden.de wrote:
Hi,
lately I worked a lot to extend POK with the features necessary to accommodate an RTEMS partition. Since hello world works, I need a way to pass clock ticks to the RTEMS partition running on POK. Therefore the interrupt handling needs to support custom handlers. If everything works out, I can enable the virtualization layer to register an interrupt vector, and if it occurs, RTEMS is notified by invoking C_dispatch_isr().
Blog: http://phipse.github.io/rtems/blog/2013/08/17/pok-hardware-interrupt-handling/
Source: https://github.com/phipse/pok
Cheers,
Philipp
___
rtems-devel mailing list
rtems-devel@rtems.org
http://www.rtems.org/mailman/listinfo/rtems-devel
Re: [VirtLayer] Changes to the interrupt handling in POK
OK, that clarifies the next steps. You are correct; POK should be tested first without RTEMS integration. My question was more about the separation between the RTEMS part of the interrupt handler and the POK part. If RTEMS registers a handler, it will monopolize it; i.e. there is no possibility to run a POK handler and then an RTEMS handler. Such a separation is required to maintain segregation and to prevent RTEMS from interfering with POK: if RTEMS registers a handler for a vector already used by POK (e.g. the clock tick interrupt), POK cannot let itself be overruled and become dependent on RTEMS to run its own code. But for a first iteration it's OK to ignore this.

Cláudio

On Sun, Aug 18, 2013 at 3:03 PM, Philipp Eppelt philipp.epp...@mailbox.tu-dresden.de wrote:

Hi, currently I have no RTEMS partition running on POK, as these changes must first be validated with pure POK partitions. Also, I haven't written the code yet to get an interrupt handler through the partition abstraction. CLOCK_HANDLER is a macro which increments the tick counter and invokes the scheduler in the POK kernel. To register IRQ handlers from a partition, e.g. an RTEMS partition, I first need to create a new syscall to get the callback function into the meta_handler list. So when RTEMS tries to attach to an interrupt, the virtualization-layer implementation of POK is invoked and registers a predefined handler with the meta_handler for this IRQ number via a syscall. When the interrupt occurs, the registered handler is invoked with the vector number as an argument. Then the handler invokes C_dispatch_isr() in RTEMS, passing the vector number along. That's the plan for the next week.

Cheers,
Philipp
Re: [VirtLayer] Changes to the interrupt handling in POK
That's it! I hadn't gotten that dual-handler design from the implementation on your blog. You should also transition to user mode before calling the RTEMS handler.

On Sun, Aug 18, 2013 at 7:33 PM, Philipp Eppelt philipp.epp...@mailbox.tu-dresden.de wrote:

Hi, why only talk about it and not implement it right away? ;)

POK now administers one meta_handler per HW IRQ line. Each meta_handler consists of a vector number and a table of size POK_CONFIG_NB_PARTITION + 1, so the meta_handler can administer one handler per partition and one handler for the kernel. The clock tick, for instance, needs a kernel handler and can now also be forwarded to a partition. The kernel handler is invoked on each interrupt before the partition handler is called, so there is no overruling of the POK kernel. As far as I have tested it, the partition handler runs side by side with the kernel handler with a constant offset, as it was registered later in time.

Cheers,
Philipp