Perhaps a fixed address somewhere below the address the kernel picks when "the system chooses an address" could help solve this problem.
On Fri, Apr 29, 2016 at 1:32 AM, Christophe Milard <[email protected]> wrote:
> Thanks, Barry,
>
> I definitely think that this is a good candidate for the arch call.
> Even if I am not 100% sure I fully understand the whole picture, this
> text raises a few questions:
>
> 1) Is this plain user-space code, or are there related kernel modules
> involved? I am guessing user space, but please confirm.
>
> 2) Do the arenas have to be known at init time? (You wrote "Typically an
> initial process will call tmc_shmem_create() to create the file using a
> fixed, known file name. Other processes then call tmc_shmem_open() to gain
> access to the arena.")
>
> 3) Are there any constraints on the processes involved (such as processes
> having to be descendants of a common ancestor)?
>
> 4) Are processes that attempt to map the arena guaranteed to get the same
> address AND guaranteed that the mapping will succeed? (I get worried when
> reading "If the user specifies ADDR as zero, the system will choose an
> address, and all subsequent users of the arena will automatically try to
> load it at that address...": this seems to imply that the mapping will
> succeed only if the virtual address in the process performing the mapping
> happens to be free...)
> This is exactly the problem I am trying to tackle by reserving the virtual
> address space from the beginning.
>
> Thanks anyway, Barry! Even if there are still questions, I do appreciate
> your input! I guess we can take more discussion on the ARCH call!
>
> Christophe
>
> On 28 April 2016 at 17:50, Barry Spinney <[email protected]> wrote:
>
>> One of our Linux developers, Chris Metcalf, several years ago wrote a
>> library module for the TileGx chip called tmc/shmem.[hc]. While targeted
>> for the TileGx, much of the concepts and code are applicable to most
>> Linux-based platforms.
>> Instead of sending the entire module header and source code (for which I
>> probably need some managerial approval), I have instead excerpted some of
>> the main concepts and APIs below.
>>
>> This module does have the property that "reserving" shared virtual
>> address space has no cost - i.e. no Linux virtual memory page tables will
>> be added or changed, nor will there be any cost/effect on the file
>> system. However, once reserved virtual addresses are made "active" (e.g.
>> via a call to tmc_shmem_alloc), then of course page tables can get added,
>> physical memory could get used, and file system use of the associated
>> backing file can start occurring.
>>
>> IF this approach is chosen to be used by ODP (or is at least a strong
>> contender), then I can (after getting the likely manager approval) send
>> in a proper ODP header file and maybe a strawman implementation as an
>> RFC proposal.
>>
>> ********************************************************************************
>>
>> Inter-process dynamic shared memory support.
>>
>> This API provides a convenient method for multiple processes to share
>> memory using a persistent filesystem-based arena that is automatically
>> mapped at the same, fixed address in all processes sharing the arena.
>> The application chooses an address for the arena to be located at and a
>> maximum size to which it can grow, and the system manages coordinating
>> access to memory mapped from the file among the processes. Since the
>> address is fixed, absolute pointer values, etc., may be safely stored
>> into the arena.
>>
>> As is always true when you use shared memory, you should employ
>> appropriate memory fencing to ensure that any modifications are actually
>> fully visible before they are used by any other process.
>>
>> Typically an initial process will call tmc_shmem_create() to create the
>> file using a fixed, known file name. Other processes then call
>> tmc_shmem_open() to gain access to the arena.
>> The creator should first initialize a tmc_alloc_t object to indicate any
>> special attributes of the desired memory, such as huge page size,
>> variant cache attributes, etc. If huge pages are requested, the
>> tmc_shmem code will automatically open an additional file in the
>> appropriate hugetlb file system. The files are opened such that they are
>> automatically closed if the process calls exec() to start a new
>> executable.
>>
>> The APIs create a file with permission 0600 (owner-only read/write), but
>> the application may invoke fchmod() or fchown() on the underlying file
>> descriptors if desired to reset the ownership and permissions.
>>
>> If the application wishes to create a temporary file name to hold the
>> arena, it can use tmc_shmem_create_temp() and pass in a template
>> filename, just as is done for mkstemp(). In this case it is typically
>> then necessary to communicate the chosen filename out-of-band to the
>> other processes that wish to share the arena.
>>
>> To grow the arena, any process that has the tmc_shmem arena open can
>> call tmc_shmem_grow(); this is implemented by extending the underlying
>> file and returning a pointer to the newly-allocated chunk of memory at
>> the end of the file. Similarly, tmc_shmem_shrink() will truncate the
>> underlying file to a shorter length, invalidating any pointers into the
>> truncated portion of the file.
>>
>> If MAP_POPULATE is set in the tmc_alloc_t mmap_flags, the code arranges
>> to fault in new pages in a way that ensures that the kernel allocates
>> physical memory for the new pages prior to returning from
>> tmc_shmem_grow(). If sufficient memory is not available (or if the
>> maximum address space as specified in tmc_shmem_create has been
>> exhausted) the routine will fail and set errno to ENOMEM.
>> To handle allocating and freeing shared memory more dynamically, the
>> tmc_shmem_alloc() routine returns a tmc_alloc_t pointer that can be used
>> to allocate individual pages from the end of the file. The general case
>> of unmapping individual pages is not supported (although the special
>> case of unmapping the page(s) at the end of the currently-allocated file
>> mapping will call tmc_shmem_shrink() to return those pages to the
>> operating system).
>>
>> ....
>>
>> When a process is done using an arena, it can call tmc_shmem_close(), or
>> it can simply allow the operating system to clean up its use of the
>> arena at process exit time.
>>
>> When access to an arena is no longer needed, it can be unlinked by
>> calling tmc_shmem_unlink(). Note that a call to tmc_shmem_unlink() is
>> safe while the arena is in use, and the actual (now-unnamed) file and
>> associated memory will remain reserved until the last process calls
>> tmc_shmem_close() or exits. In fact, this pattern can be useful if a
>> fixed set of processes wishes to share some memory, but then forbid any
>> other processes from gaining access to the file; in that case, after all
>> the tmc_shmem_open() calls are complete, one process can call
>> tmc_shmem_unlink() to prohibit any other access to the arena.
>>
>> ***********************************************************************
>> tmc_shmem_create( const char *path, const tmc_alloc_t *alloc, void *addr,
>> size_t maxsize);
>>
>> If the user specifies ADDR as zero, the system will choose an address,
>> and all subsequent users of the arena will automatically try to load it
>> at that address. Note that it is generally better to pick an address
>> explicitly to avoid conflicts in other processes or in future runs of
>> the same program, by ensuring the arena is well away from all other
>> mappings (for example shared objects, thread stacks, etc).
>> Doing so makes it more likely that all processes that wish to load the
>> arena at its required address can in fact do so.
>>
>> ***********************************************************************
>>
>> Thanx Barry.
>
> _______________________________________________
> lng-odp mailing list
> [email protected]
> https://lists.linaro.org/mailman/listinfo/lng-odp
