Fun stuff.

> On Jan 27, 2019, at 2:46 PM, Michael Jones <michael.jo...@gmail.com> wrote:
>
> I often write “solve hard math problem” codes and miss my Google days of lusting for a data center in bring-up with 20K idle CPUs. My home computers always run at 100% and I have an AWS large instance as well. Looking forward to AMD’s 2x64 Rome.
>
> On Sun, Jan 27, 2019 at 12:38 PM robert engels <reng...@ix.netcom.com> wrote:
> Even with 64 cores your process takes 3 hrs… unless they are all external requests - so essentially unlimited cores.
>
>> On Jan 27, 2019, at 2:32 PM, Michael Jones <michael.jo...@gmail.com> wrote:
>>
>> Glad you saw it. Lots of ways to do it, but small-seeming details shape the approach:
>>
>> Are the tasks of similar effort? If yes, good; if not, it is VERY desirable to start the hard ones first and on different workers.
>>
>> Do you know how many tasks? If you do not—if you only know when you’re done with new tasks—then you need to signal completion.
>>
>> Can you wait for the last result before sending the first output? Could mean a big stall and is very different from the single-thread case, but it allows sorting and easy load balancing.
>>
>> Might you want to quit early and abandon processing? This is not so natural to the mechanisms, so it requires finesse in your code (as suggested by various debates about the context idea).
>>
>> My snippet is one path through this decision matrix.
>>
>> Also, it uses an outer ask/answer channel pair for uniformity between serial and parallel modes. This is fine for my case (ten thousand minute-long tasks) with a max rate of channel sending on my laptop of about 3M sends/sec. Overhead here is about zero, but if the tasks were itsy-bitsy then the overhead would matter. So you’d want to batch them—or restructure.
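Michael's complete snippet is not reproduced in this excerpt, but a minimal sketch of the ask/answer channel-pair idea he describes might look like the following. The names (task, result, run) and the squaring "work" are illustrative assumptions, not his code; the point is that the caller's send/receive loop is identical whether one worker (serial) or many workers (parallel) drain the ask channel.

package main

import (
	"fmt"
	"runtime"
)

type task struct{ n int }
type result struct{ n, square int }

// run starts the given number of workers, each reading tasks from ask
// and writing results to answer. With workers == 1 the pipeline is
// effectively serial; with workers == runtime.NumCPU() it is parallel.
func run(workers int, ask <-chan task, answer chan<- result) {
	for i := 0; i < workers; i++ {
		go func() {
			for t := range ask {
				answer <- result{t.n, t.n * t.n} // stand-in for the real work
			}
		}()
	}
}

func main() {
	const n = 10
	ask := make(chan task)
	answer := make(chan result)
	run(runtime.NumCPU(), ask, answer) // run(1, ask, answer) for serial mode

	go func() {
		for i := 0; i < n; i++ {
			ask <- task{i}
		}
		close(ask)
	}()

	for i := 0; i < n; i++ { // n tasks in, so exactly n answers out
		r := <-answer
		fmt.Println(r.n, r.square)
	}
}

Note that answers arrive in completion order, not input order; restoring the input order is the subject of the original question quoted below.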
>> On Sun, Jan 27, 2019 at 3:51 AM Tom Payne <twpa...@gmail.com> wrote:
>> Yes, I did, thank you! My reply was to the previous message (robert engels' post about it being "straightforward" but not providing code) and I think we just both hit send at about the same time.
>>
>> On Sat, 26 Jan 2019 at 02:52, Michael Jones <michael.jo...@gmail.com> wrote:
>> Did you notice that I sent you the complete code above?
>>
>> On Fri, Jan 25, 2019 at 2:48 PM <twpa...@gmail.com> wrote:
>> For what it's worth, http://www.golangpatterns.info/concurrency/parallel-for-loop implements an order-preserving parallel map, but does not limit the number of workers.
>>
>> In my case, I want to limit the number of workers because I'm making a lot of system calls and don't want to overload the kernel. runtime.NumCPU() seems like a reasonable limit.
>>
>> On Friday, January 25, 2019 at 8:04:31 PM UTC+1, twp...@gmail.com wrote:
>> Hi,
>>
>> I have a number of slow tasks that I want to run concurrently across runtime.NumCPU() workers in a single process. The tasks have a specific input order, but they are completely independent of each other and can execute in any order. I would like to print the output of each task in the same order as the input order of the tasks.
>>
>> This can be implemented by including each task's index in the input order as it is distributed via a channel to the workers, with the final collection of results assembled using these task indexes before the results are printed.
>>
>> Assumptions:
>> - Small number of tasks (~10,000 max), i.e. this easily fits in memory.
>> - Single Go process, i.e. I don't want/need a distributed system.
>>
>> This feels like it should be a common problem, and there's probably either a library or a standard Go pattern out there which can do it. My web search skills didn't find such a library, though. Do you know of one?
>>
>> Cheers,
>> Tom
>>
>> Background info to avoid the XY problem <http://xyproblem.info/>: this is to make chezmoi <https://github.com/twpayne/chezmoi> run faster. I want to run the doctor checks <https://github.com/twpayne/chezmoi/blob/ed27b49f9ca4cd3662e6a59908dee24b0d295b79/cmd/doctor.go#L102-L163> (basically os.Exec'ing a whole load of binaries to get their versions) concurrently in the short term. In the long term I want to make chezmoi's apply concurrent, so it runs faster too. In the first case, the order requirement is because I want all users to see the output in the same order so that it's easy to compare. In the second case, the order requirement comes because I need to ensure that parent directories are in the correct state before checking their children.
>
> --
> Michael T. Jones
> michael.jo...@gmail.com
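For reference, here is a minimal sketch of the pattern Tom describes above: each task carries its index in the input order, a pool of runtime.NumCPU() workers pulls tasks from a channel, and each result is stored into the slice slot named by that index so the output can be printed in input order. The names (indexedTask, slowTask) and the string "work" are illustrative assumptions, not code from the thread.

package main

import (
	"fmt"
	"runtime"
	"sync"
)

// indexedTask pairs a task's argument with its position in the input order.
type indexedTask struct {
	i   int
	arg string
}

// slowTask stands in for a slow, independent operation such as an exec'd
// version check.
func slowTask(arg string) string {
	return "checked " + arg
}

func main() {
	inputs := []string{"git", "gpg", "vim", "zsh"}
	results := make([]string, len(inputs)) // one slot per task, indexed by input order

	tasks := make(chan indexedTask)
	var wg sync.WaitGroup
	for w := 0; w < runtime.NumCPU(); w++ { // bounded worker pool
		wg.Add(1)
		go func() {
			defer wg.Done()
			for t := range tasks {
				// Each index is written by exactly one worker, so the
				// slice needs no further synchronization.
				results[t.i] = slowTask(t.arg)
			}
		}()
	}
	for i, arg := range inputs {
		tasks <- indexedTask{i, arg}
	}
	close(tasks)
	wg.Wait()

	for _, r := range results { // print in the original input order
		fmt.Println(r)
	}
}

This variant answers Michael's "can you wait for the last result?" question with yes: nothing is printed until every task has finished. Printing result i as soon as results 0 through i are all done would avoid that stall at the cost of slightly more bookkeeping.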