Martin Tarenskeen <[email protected]> writes:

> On Sun, 18 May 2014, Graham King wrote:
>
>> On Sun, 18 May 2014 08:38:09 +0200 (CEST) Martin Tarenskeen wrote:
>>
>>> Luckily I don't need this kind of commandline virtuosity. I think I
>>> can do what I need with one of the first and easiest suggestions
>>>
>>>> convert-ly -e **/*.ly
>>
>> One more tidbit of painfully-gained experience in this area: if using
>> a solution that walks the directory tree, starting convert-ly
>> processes, it's important to do it in a way that limits the number of
>> concurrent invocations of convert-ly. I've managed to wedge OS X by
>> using a tool that failed to do that :(
>
> Would this be safer?
>
> ls **/*.ly | while read f; do convert-ly -e "${f}"; done
No.  convert-ly -e **/*.ly starts just one process anyway.

> Which leads me to another Linux commandline topic:
>
> Which would be faster, on a machine with a multicore CPU and enough RAM?
>
> lilypond a.ly b.ly c.ly d.ly
> lilypond a.ly & lilypond b.ly & lilypond c.ly & lilypond d.ly
> lilypond a.ly && lilypond b.ly && lilypond c.ly && lilypond d.ly
>
> Thinking of it, I can test this myself.

lilypond -djob-count=4 a.ly b.ly c.ly d.ly

A good rule of thumb is one more job than CPU cores.  That should be
enough to saturate.

-- 
David Kastrup

_______________________________________________
lilypond-user mailing list
[email protected]
https://lists.gnu.org/mailman/listinfo/lilypond-user
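[Editorial note: if one does want a tree walk with a hard cap on concurrent convert-ly processes (the failure mode Graham describes), GNU find and xargs can provide it. A minimal sketch; `echo` is prepended as a dry run, so the commands are printed rather than executed -- drop it to actually run convert-ly.]

```shell
# List .ly files and run at most 4 converters at a time.
# -print0 / -0 keep filenames with spaces intact; -n 1 passes one file
# per invocation; -P 4 caps concurrency at four processes.
# 'echo' makes this a dry run -- remove it to really invoke convert-ly.
find . -name '*.ly' -print0 | xargs -0 -n 1 -P 4 echo convert-ly -e
```

This also sidesteps the usual caveat about parsing `ls` output in a `while read` loop, since filenames travel NUL-delimited end to end.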

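[Editorial note: the cores-plus-one rule of thumb can be computed at call time rather than hard-coded. A sketch assuming GNU coreutils' `nproc` is available; `echo` again makes it a dry run.]

```shell
# One more job than CPU cores, per the rule of thumb above.
jobs=$(( $(nproc) + 1 ))
# Dry run: prints the command line; drop 'echo' to actually render.
echo lilypond -djob-count="$jobs" a.ly b.ly c.ly d.ly
```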