> On Sep 2, 2017, at 4:05 PM, David Zarzycki via swift-evolution <[email protected]> wrote:
>
>> On Sep 2, 2017, at 14:15, Chris Lattner via swift-evolution <[email protected]> wrote:
>>
>> My understanding is that GCD doesn’t currently scale to 1M concurrent queues / tasks.
>
> Hi Chris!
>
> [As a preface, I’ve only read a few of these concurrency-related emails on swift-evolution, so please forgive me if I missed something.]
>
> When it comes to GCD scalability, the short answer is that millions of tiny heap allocations are cheap, be they queues or closures. And GCD has fairly linear performance so long as the millions of closures/queues are non-blocking.
>
> The real world is far messier, though. In practice, real-world code blocks all of the time. In the case of GCD tasks, this is often tolerable for most apps, because their CPU usage is bursty and any accidental “thread explosion” that is created is super temporary. That being said, programs that create thousands of queues/closures that block on I/O will naturally get thousands of threads. GCD is efficient but not magic.
>
> As an aside, there are things that future versions of GCD could do to minimize the “thread explosion” problem. For example, if GCD interposed the system call layer, it would gain visibility into *why* threads are stalled, and therefore GCD could 1) be more conservative about when to fire up more worker threads and 2) defer resuming threads that are at “safe” stopping points if all of the CPUs are busy.
>
> That being done, though, the complaining would just shift. Instead of an “explosion of threads”, people would complain about an “explosion of stacks” that consume memory and address space. While I and others have argued in the past that solving this means that frameworks must embrace callback API design patterns, I personally am no longer of this opinion. As I see it, I don’t think the complexity (and bugs) of heavy async/callback/coroutine designs are worth the memory savings. Said differently, the stack is simple and efficient. Why fight it?
>
> I think the real problem is that programmers cannot pretend that resources are infinite. For example, if one implements a photo-library browsing app, it would be naive to try to load every image at launch (async or otherwise). That just won’t scale, and that isn’t the operating system’s fault.
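To make the failure mode David describes concrete, here is a minimal, hedged sketch of blocking work submitted to GCD; the queue label, loop count, and sleep-based stand-in for blocking I/O are illustrative values, not anything from his message, and the exact number of worker threads GCD brings up depends on its internal pool limits.

```swift
import Dispatch
import Foundation

// A deliberately bad pattern: every closure blocks, so GCD cannot reuse a
// worker thread until the blocking call returns, and it keeps bringing up
// additional workers (up to its internal pool limit) to keep the CPUs busy.
let queue = DispatchQueue(label: "com.example.blocking-work", attributes: .concurrent)
let group = DispatchGroup()

for _ in 0..<1_000 {
    queue.async(group: group) {
        // Stand-in for blocking I/O (a synchronous read, a lock, a network call).
        Thread.sleep(forTimeInterval: 1)
    }
}

group.wait()
```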
Problems like thread explosion can be solved using higher-level constructs, though. For example, (NS)OperationQueue has a .maxConcurrentOperationCount property. If you make a global OperationQueue, set the maximum to whatever you want it to be, and run all your “primitive” operations through the queue, you can manage the thread count rather effectively.

I have a few custom Operation subclasses that easily wrap arbitrary asynchronous operations as Operation objects; once the new async/await API comes out, I plan to adapt my subclass to support it, and I’d be happy to submit the code to swift-evolution if people are interested.

Charles
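For anyone curious what such a wrapper might look like, here is a rough, hypothetical sketch along the lines Charles describes. The class name `AsyncBlockOperation`, the completion-handler shape, and the concurrency limit of 4 are illustrative assumptions rather than his actual code, and the state handling is kept minimal (no locking) for brevity.

```swift
import Foundation

// Hypothetical sketch of an Operation subclass that wraps an arbitrary
// asynchronous task. The wrapped closure receives a completion handler
// that must be called exactly once when the async work finishes.
final class AsyncBlockOperation: Operation {
    private let work: (@escaping () -> Void) -> Void
    private var _executing = false
    private var _finished = false

    init(work: @escaping (@escaping () -> Void) -> Void) {
        self.work = work
        super.init()
    }

    override var isAsynchronous: Bool { return true }
    override var isExecuting: Bool { return _executing }
    override var isFinished: Bool { return _finished }

    override func start() {
        if isCancelled {
            willChangeValue(forKey: "isFinished")
            _finished = true
            didChangeValue(forKey: "isFinished")
            return
        }

        willChangeValue(forKey: "isExecuting")
        _executing = true
        didChangeValue(forKey: "isExecuting")

        // Kick off the wrapped async work; finish() runs when it completes.
        work { [weak self] in self?.finish() }
    }

    private func finish() {
        willChangeValue(forKey: "isExecuting")
        willChangeValue(forKey: "isFinished")
        _executing = false
        _finished = true
        didChangeValue(forKey: "isExecuting")
        didChangeValue(forKey: "isFinished")
    }
}

// A single shared queue that bounds how many operations run at once.
let sharedQueue = OperationQueue()
sharedQueue.maxConcurrentOperationCount = 4  // example limit; pick what fits

sharedQueue.addOperation(AsyncBlockOperation { done in
    // Any async API works here; calling `done` tells the queue this
    // operation is finished so the next one can start.
    URLSession.shared.dataTask(with: URL(string: "https://swift.org")!) { _, _, _ in
        done()
    }.resume()
})
```

Bounding concurrency at the OperationQueue level like this keeps the number of in-flight tasks (and therefore blocked threads) fixed, regardless of how many operations are enqueued.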
