I love experimenting with processes like this. Create as many operations as
you have cores on your box and run a few tests to see how quickly the
operations complete. Try it with 2x the number of cores, 1x the number of
cores, 1/2x the number of cores, and the number of cores + 1.
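One way that experiment might look in Swift (the busy-loop workload, the job count of 32, and the queue label are all made up for illustration; a semaphore caps how many items run at once):

```swift
import Foundation
import Dispatch

// Measure how long `jobs` items take with at most `workers` running at once.
func measure(workers: Int, jobs: Int) -> TimeInterval {
    let queue = DispatchQueue(label: "bench", attributes: .concurrent)
    let gate = DispatchSemaphore(value: workers)   // caps parallelism
    let group = DispatchGroup()
    let start = Date()
    for _ in 0..<jobs {
        gate.wait()                                // block until a slot frees up
        queue.async(group: group) {
            var x = 0.0                            // stand-in for real work
            for i in 1...200_000 { x += Double(i).squareRoot() }
            _ = x
            gate.signal()
        }
    }
    group.wait()
    return Date().timeIntervalSince(start)
}

let cores = ProcessInfo.processInfo.activeProcessorCount
for w in [cores / 2, cores, cores + 1, cores * 2] where w > 0 {
    print("\(w) workers: \(measure(workers: w, jobs: 32))s")
}
```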
Instead of using NSOperationQueue, I would use GCD to handle the tasks. Create
a new concurrent queue (dispatch_queue_create() with the
DISPATCH_QUEUE_CONCURRENT attribute), then enqueue the individual items to
that queue for processing with dispatch_async(). Everything can be handled in
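In Swift, that pattern looks roughly like this (the queue labels and the squaring "work" are placeholders; a second, serial queue guards the shared results array, since the work items themselves may run in parallel):

```swift
import Dispatch

// Concurrent queue: items submitted with async() can run in parallel.
let work = DispatchQueue(label: "work", attributes: .concurrent)
// Serial queue used only to protect the shared results array.
let collect = DispatchQueue(label: "collect")
let group = DispatchGroup()

var results: [Int] = []
for item in 1...8 {
    work.async(group: group) {
        let processed = item * item        // stand-in for real per-item work
        collect.async(group: group) {
            results.append(processed)      // serialized, so no data race
        }
    }
}
group.wait()                               // block until everything finishes
print(results.sorted())
```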
You mentioned creating and managing threads on your own, but that's what
NSOperationQueue (and the lower-level DispatchQueue) already do. They will
also be more efficient with thread management, since they have an intimate
understanding of the capabilities of the processor, etc., and will work to do the
That is a good idea. Thanks a lot!
Maybe I can turn this into more fine-grained, dynamic load balancing (or
latency hiding), as follows:
create a number of threads (workers);
as soon as a worker is finished with its "current" image, it gets the next
one (a piece of work) out of the list.
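A minimal sketch of that worker scheme, assuming a locked index into a shared list stands in for "get the next image" (the item values, the doubling work, and the four workers are illustrative):

```swift
import Foundation
import Dispatch

// Dynamic load balancing: workers pull the next item as soon as they finish.
final class WorkList {
    private let lock = NSLock()
    private var next = 0
    private let items: [Int]
    init(_ items: [Int]) { self.items = items }
    func take() -> Int? {
        lock.lock(); defer { lock.unlock() }
        guard next < items.count else { return nil }
        let item = items[next]; next += 1
        return item
    }
}

let list = WorkList(Array(1...10))
let done = DispatchGroup()
var total = 0
let totalLock = NSLock()

for _ in 0..<4 {                           // four workers
    DispatchQueue.global().async(group: done) {
        while let item = list.take() {     // grab the next piece of work
            let processed = item * 2       // stand-in for image processing
            totalLock.lock(); total += processed; totalLock.unlock()
        }
    }
}
done.wait()
print(total)                               // 2 * (1 + ... + 10)
```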
One way to speed it up is to do as much work as possible in parallel. One
approach (and this is just off the top of my head) is:
1. Create an NSOperationQueue, and add a single operation on that queue to
manage the entire process. (This is because some parts of the process are
synchronous and might
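The single-manager-operation idea might be sketched like this in Swift (the second queue, its width of 4, and the squaring work are all assumptions for illustration):

```swift
import Foundation

// One "manager" operation drives the whole process; it fans work out to a
// second queue and waits, which keeps the synchronous parts in order.
let manager = OperationQueue()
let workers = OperationQueue()
workers.maxConcurrentOperationCount = 4

manager.addOperation {
    var squares: [Int] = []
    let lock = NSLock()
    let ops = (1...5).map { n in
        BlockOperation {
            let value = n * n              // stand-in for real per-item work
            lock.lock(); squares.append(value); lock.unlock()
        }
    }
    // waitUntilFinished: the manager blocks here until every worker is done.
    workers.addOperations(ops, waitUntilFinished: true)
    print(squares.sorted())
}
manager.waitUntilAllOperationsAreFinished()
```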