We considered using memory-mapped files for the checkpoint state information in SETI@home, but decided that it was virtually impossible to guarantee synchronization on exit. And, of course, the problem with using a RAM disk for checkpoints is that the checkpoint data disappears if power is lost.
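To make the synchronization concern concrete (this is a hypothetical sketch, not the SETI@home code; the file name and struct fields are invented for illustration): an application that keeps its checkpoint in a mapped file only knows the state has reached disk after an explicit msync(), and an abrupt exit or crash skips that call entirely, so the on-disk copy may be stale or torn.

    /* Hypothetical sketch, not code from SETI@home or BOINC: a checkpoint
     * kept in a memory-mapped file.  checkpoint.dat, struct checkpoint,
     * iteration and partial_sum are made-up names. */
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    struct checkpoint { long iteration; double partial_sum; };

    int main(void) {
        int fd = open("checkpoint.dat", O_CREAT | O_RDWR, 0600);
        if (fd < 0) return 1;
        if (ftruncate(fd, sizeof(struct checkpoint)) < 0) return 1;

        struct checkpoint *cp = mmap(NULL, sizeof *cp,
                                     PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (cp == MAP_FAILED) return 1;

        cp->iteration += 1;          /* update checkpoint state in place */
        cp->partial_sum += 0.5;

        /* Without an explicit flush like this, there is no guarantee the
         * dirty pages reach the disk before the process exits -- and a
         * crash or kill skips it entirely, which is the synchronization
         * problem described above. */
        msync(cp, sizeof *cp, MS_SYNC);

        munmap(cp, sizeof *cp);
        close(fd);
        return 0;
    }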
But if you want checkpoints not to occur, there is a preference for that: "Tasks checkpoint to disk at most every: 60 seconds". You can set it to 8 hours. You'll also want to choose "suspend to memory". Of course the output of the app (which, depending on the app, might be written more often than checkpoints) will still spin up the disks. If you really want to operate from a RAM disk, just copy your project directory to /dev/shm and link it into /var/lib/boinc/projects. You'll still need to back it up occasionally for those times when the power goes out.

On Mon, Jun 17, 2013 at 1:04 AM, "Steffen Möller" <[email protected]> wrote:
>
> > Sent: Monday, 17 June 2013, 08:46
> > From: "David Anderson" <[email protected]>
> >
> > Manager/client communication uses TCP - no disk I/O.
> > So the possible sources of large disk I/O are:
> >
> > 1) checkpoint or output file generation by apps
> > 2) writing of client_state.xml (or maybe other files)
> >    by the BOINC client
>
> For my 24/7 machines I would very much like to switch the checkpointing
> and the client_state.xml writes off, or have them updated only every
> hour, or have their update initiated only upon request by the BOINC
> client.
>
> > 3) client/app communication via memory-mapped files.
> > According to my calculations,
> > this should generate less than 1 MB/hour per task.
> > Note: we use memory-mapped files (mmap) instead of
> > pure shared memory segments (shmget)
> > because Mac OS X has a system-wide limit of 32 shared-mem segments,
> > and some Linux systems have limits.
> > Maybe there's a way to configure memory-mapped files
> > to not write back to the disk file, but I can't find one.
>
> It should be shm_open together with mmap, i.e. just substituting shm_open
> for the call to open that BOINC currently performs. From
> http://pubs.opengroup.org/onlinepubs/009695399/functions/shm_open.html
>
> fd = shm_open("/myregion", O_CREAT | O_RDWR, S_IRUSR | S_IWUSR);
> rptr = mmap(NULL, sizeof(struct region),
>             PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
>
> This should be disk-free, with "/myregion" having only a symbolic meaning.
> (A fuller standalone sketch of this pattern follows at the end of this
> thread.) Here
> http://stackoverflow.com/questions/4836863/shared-memory-or-mmap-linux-c-c-ipc
> I also found a reference to an interesting idea using IPv6 multicast to
> localhost; no idea whether that is applicable to BOINC. The trend is more
> towards shm_open+mmap.
>
> Many thanks for the swift reply
>
> Steffen
>
> > Can someone investigate which of these is the source of the large I/O?
> >
> > -- David
> >
> > On 16-Jun-2013 11:03 AM, "Steffen Möller" wrote:
> > > Hello,
> > >
> > > iostat gives rather drastic values for the amount of data that is
> > > written to disk by BOINC and/or its applications. Some good fellow
> > > once crafted a bug report about it,
> > > http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=636075
> > > and my own reply was not overly helpful at the time. Reducing the I/O
> > > overhead would certainly be helpful. Reduced latency is one thing, but
> > > with an outreach to the mobile world there are many SSD / flash device
> > > users who care very much about write endurance.
> > >
> > > It kept bugging me, and quite a while back it came to me that this may
> > > not be because of the applications' work but instead be mere overhead
> > > of file-based communication between the app and the
> > > boinc-client / boinc-manager.
> > > I just never got around to chasing this up, also having read so often
> > > about communication being done via shared memory, which should not
> > > need much I/O, and if so, then not with disk devices but with
> > > something like tmpfs. "Let's see how it is done", I just thought.
> > >
> > > From what I found, there are two functions to create shared memory in
> > > BOINC, both in lib/shmem.cpp. One is create_shmem, which internally
> > > uses shmget and should be just fine. The other is create_shmem_mmap,
> > > which internally uses mmap - which can be memory-only or not. Early
> > > mmap allowed memory-only (anonymous) sharing only between forks of the
> > > same process. For anything else one needs to pass a regular file
> > > descriptor that tells mmap from/to where to read/write the data. Later
> > > came the function shm_open (http://linux.die.net/man/3/shm_open),
> > > which creates an entry in /dev/shm if I got this right, and allows
> > > passing that fd for completely in-memory communication with a
> > > (pseudo-)file-mapping mmap.
> > >
> > > In today's BOINC code, mmap is called with a file descriptor created
> > > with "open" (no typo, it is from boinc/lib/std_fixes.h), which itself
> > > calls open64 as defined in /usr/include/fcntl.h (?) and expects to
> > > create a regular file.
> > >
> > > I admit I don't know too much about the consequences of memory-only
> > > communication for BOINC. It is not more than a couple of megabytes
> > > per second indicating the status of the applications, right? So not
> > > too much memory would be taken. With checkpointing performed
> > > independently, this could then work just fine. Is there something I
> > > did not see? I am otherwise much tempted to address it and see how far
> > > I get. Some extra thinking is required to maintain compatibility
> > > between the BOINC client and older statically linked applications.
> > > With Debian we have the applications dynamically linking to the same
> > > BOINC library as the BOINC client. Promising enough? Please direct me.
> > >
> > > Cheers,
> > >
> > > Steffen

_______________________________________________
boinc_dev mailing list
[email protected]
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and
(near bottom of page) enter your email address.
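For reference, here is a minimal, self-contained version of the shm_open+mmap pattern Steffen quotes above. This is a sketch only: the object name "/boinc_app_slot" and struct app_status are invented for illustration and are not BOINC identifiers. Note that the two-line excerpt from the POSIX page omits the ftruncate() needed to size a freshly created object before mapping it; on older glibc versions the program must also be linked with -lrt.

    /* Sketch of POSIX shared memory via shm_open + mmap instead of a
     * regular file + mmap.  "/boinc_app_slot" and struct app_status are
     * made-up names for illustration, not BOINC identifiers. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    struct app_status {
        double fraction_done;
        double cpu_time;
    };

    int main(void) {
        /* On Linux this shows up as /dev/shm/boinc_app_slot; no real disk
         * file is created or written back. */
        int fd = shm_open("/boinc_app_slot", O_CREAT | O_RDWR,
                          S_IRUSR | S_IWUSR);
        if (fd < 0) { perror("shm_open"); return 1; }

        /* A freshly created object has size 0; it must be sized before
         * it can be mapped. */
        if (ftruncate(fd, sizeof(struct app_status)) < 0) {
            perror("ftruncate"); return 1;
        }

        struct app_status *st = mmap(NULL, sizeof *st,
                                     PROT_READ | PROT_WRITE, MAP_SHARED,
                                     fd, 0);
        if (st == MAP_FAILED) { perror("mmap"); return 1; }
        close(fd);  /* the mapping stays valid after the fd is closed */

        st->fraction_done = 0.42;  /* visible to any other process that
                                      maps the same name */
        st->cpu_time = 123.0;
        printf("fraction_done=%.2f\n", st->fraction_done);

        munmap(st, sizeof *st);
        shm_unlink("/boinc_app_slot");  /* remove the name when finished */
        return 0;
    }

Whether Mac OS X imposes comparable limits on POSIX shared memory objects would still need to be checked; the 32-segment limit David mentions applies to the System V (shmget) interface.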
