This is somewhat related to the topic, or at least to the spirit of why the statement was made.
Considering that even when a task can be split up and executed in parallel, it is not always a performance win to do so versus processing sequentially, I wonder how valid the current push for highly parallel processing and very high core counts really is. A fair amount of code already has to be executed sequentially. If a significant portion of the remaining code that could run in parallel turns out in practice to involve data sets too small, or operations too simple, to be worth the overhead of parallelizing, then that really limits how much parallel processing power can actually be put to use. Ways may be found in the future to reduce the overhead of parallel execution so that more code can effectively run in parallel; otherwise, the reality may not live up to the hype.

At the very least I would hope that today's most computationally expensive tasks can, at least for the most part, benefit. Rendering and encoding are fair examples. Games are usually trickier because keeping everything synchronised becomes a pain, though I imagine some games could benefit more easily than others. I remember that in Master of Orion III, with a large enough galaxy, you were probably best off making a drink or going for a little walk while a turn was being calculated. I suspect it was fairly single-threaded, and it was made at a time when multi-core was not mainstream, but conceptually a lot of the base calculations could have been run in parallel, with only the interactions where the order of events mattered calculated in sequence. The game must have made some decisions about sequence anyway, such as whether planetary resources are calculated before or after taking an attack on the planet into account (it makes a difference if resource-producing areas were damaged or destroyed in the attack).
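The limit on speedup from the sequential portion of a program is the classic Amdahl's law argument. As a rough sketch (the 90% parallel fraction here is a hypothetical number, not one from this thread), a few lines of Java show how quickly the returns diminish as cores are added:

```java
// Amdahl's law: with parallel fraction p and n cores,
// overall speedup = 1 / ((1 - p) + p / n).
// Even as n -> infinity, speedup is capped at 1 / (1 - p).
public class Amdahl {
    static double speedup(double parallelFraction, int cores) {
        return 1.0 / ((1.0 - parallelFraction) + parallelFraction / cores);
    }

    public static void main(String[] args) {
        double p = 0.9; // assume 90% of the work parallelizes (hypothetical)
        // 16 cores gives only 6.4x; 256 cores still under 10x,
        // because the 10% sequential part dominates.
        System.out.printf("16 cores:  %.2fx%n", speedup(p, 16));
        System.out.printf("256 cores: %.2fx%n", speedup(p, 256));
    }
}
```

Under those assumptions, no number of cores ever gets past 10x, which is the point above: the sequential remainder, plus any per-task overhead, bounds how much of that parallel hardware a program can use.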
I hope you don't mind the use of this example, but it was a game that made me think about classic computer science problems of multiplicative or exponential increases in required running time as the data set grows.

On Jan 7, 1:24 pm, Ricky Clarkson <[email protected]> wrote:
> It's not outside the ability of a JIT to infer, which brings it back
> to being relevant to Java. Incidentally, Scala's in the same position
> as Java in that respect; it has no knowledge of whether methods have
> side effects.
>
> > all you need to understand is whether your loop body lends itself to
> > parallelization since this is obviously outside the ability of the
> > compiler* to magically infer.
> >
> > *Here we're talking mainstream imperative Java/C#, not Haskell/ML/
> > Clojure/Scala or other experimental languages, AFAIK this is still
> > called The Java Posse.

--
You received this message because you are subscribed to the Google Groups "The Java Posse" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to [email protected].
For more options, visit this group at http://groups.google.com/group/javaposse?hl=en.
