"David Lynch" <[email protected]> writes:

> The question with this is whether there is enough time when less than
> the ideal number of sub-processes are active to make the added
> complexity and memory usage worthwhile. I've been experimenting on my
> system with a change where the largest zoom levels are assigned to the
> first (n-1) children and all remaining zoom levels to the last child.
> On my two-core machine, rendering tiles for z17 takes about as long
> (actually slightly longer) as rendering z12-z16 combined, so there is
> much less time with a core sitting idle than with the stable client's
> method.
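For reference, the split described above can be sketched as a simple list partition. This is only an illustration of the assignment rule, not code from the t@h client; the function name and list-of-lists shape are made up here.

```python
def assign_zoom_levels(zooms, children):
    """Give the largest zoom levels one each to the first children-1
    workers, and all remaining levels to the last worker."""
    ordered = sorted(zooms, reverse=True)          # largest zoom first
    head = [[z] for z in ordered[:children - 1]]   # one big level per child
    tail = [ordered[children - 1:]]                # the rest go to the last child
    return head + tail

# Two cores, zoom levels 12..17:
print(assign_zoom_levels(range(12, 18), 2))  # [[17], [16, 15, 14, 13, 12]]
```

On a two-core machine this puts z17 alone on one child and z12-z16 on the other, which matches the observation that z17 takes roughly as long as the rest combined.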
This only helps for two cores. If you try to get the performance of four cores you are out of luck.

I once considered changing the forking to start a new fork for the rendering of every tileset. The main program would do all the server communication and start as many render forks as specified in the config. This would work as follows (with fork=2):

- get a job and OSM data
- start render fork
- get another job and OSM data
- start render fork
- get another job and OSM data
- wait for one of the forks to finish
- start render fork
- upload finished tileset
- get another job and OSM data
- wait ...

This would not improve the turnaround of individual jobs, but it would probably have better overall throughput than the current forking scheme. And it should scale a lot better.

I did not go for this approach yet, because I wanted to see what René's threading does.

Matthias

_______________________________________________
Tilesathome mailing list
[email protected]
http://lists.openstreetmap.org/listinfo/tilesathome
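The loop above can be sketched in a few lines. This is a minimal illustration of the fork-per-tileset idea, assuming a process pool to bound the number of simultaneous render forks; render_tileset and run are hypothetical names, not anything from the t@h client, and the "rendering" is a trivial stand-in so the flow is checkable.

```python
from concurrent.futures import ProcessPoolExecutor, wait, FIRST_COMPLETED

def render_tileset(job):
    # Stand-in for the real rendering work: just square the job
    # number so the result of the loop can be verified.
    return job * job

def run(fork_count, job_source):
    """Parent fetches jobs and uploads results; at most fork_count
    render forks run at any one time."""
    pending = set()
    uploaded = []
    with ProcessPoolExecutor(max_workers=fork_count) as pool:
        for job in job_source:
            # "get a job and OSM data" happens in the parent here
            if len(pending) >= fork_count:
                # wait for one of the forks to finish ...
                done, pending = wait(pending, return_when=FIRST_COMPLETED)
                for fut in done:
                    uploaded.append(fut.result())  # ... upload finished tileset
            pending.add(pool.submit(render_tileset, job))  # start render fork
        for fut in pending:                # drain the remaining forks at the end
            uploaded.append(fut.result())
    return sorted(uploaded)

# e.g. run(2, range(5)) renders five "tilesets" with two forks
```

The key point is that the parent never renders anything itself: it only talks to the server and hands tilesets to children, so throughput scales with the configured fork count rather than being tied to a fixed two-way split.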
