Hello Tobias,
1) When creating a new thread via syscall, a stack has to be passed along with it. Judging from the code in ..., the SPARTAN kernel expects a stack size of 8 KB: the lowest address of the stack is passed to the kernel via the syscall and the kernel adds those 8 KB, so the stack size appears to be hard-coded at 8 KB. But what about other stack sizes? The problem is that Genode uses variable stack sizes from 4 KB up to 16 KB.
The constant stack size is a limitation of the current implementation. Someone should have a look at it in the near future.
The basic thread support code is quite ancient and probably nobody has had any strong reason to give it a second look since it was originally implemented. Thanks for giving us the motivation to revisit it now. It should not be too hard to implement properly on most platforms (except perhaps IA-64, which I am always a little afraid of :)).
BTW: There was an entire GSoC project proposal dedicated to stack management improvements [1].
I suggest that you simply use the constant stack size as a temporary solution. You can always increase the constant to 16 KB by updating the STACK_FRAMES macro in the kernel.
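For illustration, a minimal sketch of how such a constant is typically derived (the actual definitions in the kernel may differ; only STACK_FRAMES is taken from the above, the rest is an assumption):

    /* Illustrative only -- the real kernel definitions may differ. */
    #define PAGE_SIZE     4096
    #define STACK_FRAMES  2   /* was 1: (1 << 1) * PAGE_SIZE = 8 KB */
    #define STACK_SIZE    ((1 << STACK_FRAMES) * PAGE_SIZE)  /* now 16 KB */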
[1] http://trac.helenos.org/ticket/382
2) The IPC mechanisms of HelenOS seem to focus on communication between different tasks. Therefore every _task_ has exactly one answerbox which can be addressed. But what about communication between different threads (kernel primitives) existing in one single task?
You have to understand that the kernel IPC mechanism is designed to send data between isolated address spaces. A task is a container for an address space and threads; therefore the endpoints of the kernel IPC are the tasks.
If you need to dispatch IPC messages to some more finely-grained entities inside the destination task (be it threads, fibrils as in the HelenOS async framework, or anything else), you have to do exactly that: implement a dispatcher inside the user-space task which dispatches the individual messages to the final entities. This can be safely and easily implemented in user space and is therefore not implemented in the kernel (respecting the microkernel design principles).
Note that the HelenOS/SPARTAN IPC mechanism is connection-oriented, so you can distinguish the individual source and destination entities by keeping track of the connections.
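A rough sketch of such a dispatcher in C, to give you the idea (the receive calls follow the old libc naming and may have changed; route_to_worker() is a hypothetical helper, not part of any existing interface):

    #include <errno.h>
    #include <stdbool.h>
    #include <ipc/ipc.h>

    /* Hypothetical helper: e.g. a hash table mapping the connection a
     * call arrived on to a worker (thread, fibril, ...) queue. */
    extern bool route_to_worker(ipc_callid_t callid, ipc_call_t *call);

    /* One thread owns the task's single answerbox, receives every
     * call and hands it over to the final entity. */
    static void dispatch_loop(void)
    {
        while (true) {
            ipc_call_t call;
            ipc_callid_t callid = ipc_wait_for_call(&call);

            if (!route_to_worker(callid, &call))
                ipc_answer_0(callid, ENOENT);  /* no such connection */
        }
    }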
The written documentation [2] is rather old and maybe even outdated.
The basic principles are still the same, although the terminology has changed a bit (read "pseudo threads" as "fibrils") and some of the new special IPC methods (such as IPC_M_CONNECTION_CLONE) are not documented in this PDF.
In my opinion it does not separate clearly enough what can be done with kernel primitives and what is only available in user space.
Sections 8 to 8.1 (inclusive) describe the kernel primitives. Section 8.2 describes very briefly the async framework in user space. Again, the implementation of the async framework has been improved in many ways since 2005, but the basic principles are still the same.
So could you please point out how single threads inside a task can communicate with each other?
There is no reason for the threads _of_a_single_task_ to communicate using IPC. You should use standard shared data structures for that. The IPC should be used to communicate between (threads of) distinct tasks.
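For example, two threads of one task can simply hand data over through a mutex-protected structure in their common address space. A generic POSIX-style sketch follows; HelenOS libc offers analogous primitives, so take the exact API here as an assumption, not as the HelenOS interface:

    #include <pthread.h>
    #include <stddef.h>

    /* A one-slot mailbox shared by two threads of the same task. */
    typedef struct {
        pthread_mutex_t lock;
        pthread_cond_t  full;
        const char     *msg;  /* NULL means "empty" */
    } mailbox_t;

    static void mailbox_put(mailbox_t *mb, const char *msg)
    {
        pthread_mutex_lock(&mb->lock);
        mb->msg = msg;
        pthread_cond_signal(&mb->full);
        pthread_mutex_unlock(&mb->lock);
    }

    static const char *mailbox_get(mailbox_t *mb)
    {
        pthread_mutex_lock(&mb->lock);
        while (mb->msg == NULL)
            pthread_cond_wait(&mb->full, &mb->lock);
        const char *msg = mb->msg;
        mb->msg = NULL;
        pthread_mutex_unlock(&mb->lock);
        return msg;
    }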
(The idea behind this is to create the next test, where arguments for a printf are passed from one thread to another within one single task, as an initial IPC test.)
I suggest that you modify your plan and implement two tasks passing the arguments between each other. For two threads in a single task this really does not make much sense -- it should be doable in theory, but it is plainly unnecessary.
Also note that you cannot create an IPC connection just between arbitrary tasks. You can only create new IPC connections over already established connections, either by forwarding or by creating callbacks. Therefore for the IPC to work you need the initial naming service which all tasks are automatically connected to by the kernel. In your test case one of the two communicating tasks will have to be the naming service.
Have a look at the HelenOS Naming Service (uspace/srv/ns) to get the idea. The HelenOS Naming Service is implemented only using the basic kernel IPC interface (not the async framework) and it is single-threaded, therefore it should be easy to understand.
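To give a feel for the client side of such a test, here is a sketch. SERVICE_MY_TEST and METHOD_PRINT_ARG are hypothetical numbers invented for this example, and PHONE_NS and the ipc_connect_me_to()/ipc_call_sync_*() names follow the old libc interface, so check them against the current tree:

    #include <ipc/ipc.h>
    #include <ipc/ns.h>

    /* Hypothetical identifiers for this test. */
    #define SERVICE_MY_TEST   1024  /* id the test server registers */
    #define METHOD_PRINT_ARG  512   /* above the system method range */

    /* Every task is created with a phone connected to the naming
     * service, so a new connection is forwarded through it. */
    static int send_test_arg(void)
    {
        int phone = ipc_connect_me_to(PHONE_NS, SERVICE_MY_TEST, 0, 0);
        if (phone < 0)
            return phone;

        /* One word of payload, e.g. an argument for the peer's
         * printf(); larger data would go via the IPC data copy
         * methods. */
        return ipc_call_sync_1_0(phone, METHOD_PRINT_ARG, 42);
    }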
If you need any additional help, do not hesitate to ask. We acknowledge that the documentation of the IPC is rather old, but the code should be rather easy to follow.
M.D.
