Follow-up: I tried replacing every instance of
dispatch_sync(_queue, ^{ … });
with
@synchronized(self) { … }
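For context, here's roughly the shape of the change in a typical accessor. (The property and ivar names below are made up; this just illustrates the pattern, not code from my actual project.)

    // Before: serializing access to the ivar through a private serial queue
    - (NSString*) title {
        __block NSString* result;
        dispatch_sync(_queue, ^{
            result = _title;
        });
        return result;
    }

    // After: guarding the same critical section with @synchronized
    - (NSString*) title {
        @synchronized(self) {
            return _title;    // objc_sync_exit still runs on the way out
        }
    }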
Things got faster again: it looks like @synchronized is a few percent slower than
having no thread-safety at all, but _significantly_ faster than dispatch_sync. That
seems to contradict what the GCD overview says about dispatch queues being faster than
regular locking techniques. I looked at the disassembly, and @synchronized
compiles into calls to objc_sync_enter() and objc_sync_exit(), which in turn
call pthread_mutex_lock and pthread_mutex_unlock; Instruments shows all these
functions consuming nearly zero CPU time during my benchmark. That's in contrast
to GCD, where the dispatch-queue runtime calls accounted for most of the hottest
code in the entire run.
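If anyone wants to poke at this, here's a stripped-down sketch of that kind of comparison — not my actual test, just a minimal standalone version of the same uncontended-lock measurement (assumes ARC; the queue label and iteration count are arbitrary):

    #import <Foundation/Foundation.h>

    int main(int argc, const char* argv[]) {
        @autoreleasepool {
            const NSUInteger kIterations = 1000000;
            dispatch_queue_t queue = dispatch_queue_create("com.example.bench",
                                                           DISPATCH_QUEUE_SERIAL);
            NSObject* lock = [[NSObject alloc] init];
            __block NSUInteger counter = 0;

            // Uncontended dispatch_sync onto a private serial queue
            NSDate* start = [NSDate date];
            for (NSUInteger i = 0; i < kIterations; i++)
                dispatch_sync(queue, ^{ counter++; });
            NSLog(@"dispatch_sync : %.3f sec", -[start timeIntervalSinceNow]);

            // Uncontended @synchronized on a dummy object
            start = [NSDate date];
            for (NSUInteger i = 0; i < kIterations; i++) {
                @synchronized(lock) { counter++; }
            }
            NSLog(@"@synchronized : %.3f sec", -[start timeIntervalSinceNow]);
        }
        return 0;
    }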
I’m not sure what’s going on here. GCD seems to be pretty well respected by
people I trust (I read Mike Ash’s blog posts about it pretty thoroughly while
doing my refactoring, for example) and yet my experience with it so far is that
the overhead is too high to make all the fun queue-and-block-based programming
worthwhile, at least on iOS. :(
—Jens