> On Nov 30, 2016, at 9:40 PM, Jiho Choi <jray...@gmail.com> wrote:
> 
> Thanks for providing the pointer.
> Do you have any preliminary results or goals (e.g. the replacement ratio) 
> for the optimization? 
> Is it going to replace all ARC operations with non-atomic ones for 
> single-threaded applications?

In an ideal world, it would be nice to replace all ARC operations with 
non-atomic ones for single-threaded applications. 

But in reality, this is much more difficult than it may seem at first glance. 

If this needs to happen without any hints from the developer, purely by means 
of a static analysis of the program, then it is rather difficult. The main 
problem is that the compiler needs to reason about whether a given reference 
may escape to another thread. For references created inside a function, we 
have a rather good chance of figuring out whether the reference escapes its 
thread. But if the origin of a given reference (i.e. how it was created, or 
whether it has escaped before) is unknown, which is the typical case for 
function parameters and references stored inside class instances, then the 
compiler has to assume that the reference may have escaped its original 
thread, and thus it needs to use atomic ARC operations. Some sort of global, 
whole-module/whole-program analysis may help here somewhat. But even if we 
introduced such an analysis, this would likely remain a problem for dynamic 
libraries and frameworks, because they cannot know or reason about whether 
the references passed to their exposed APIs have escaped in the user code.
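
To make the distinction concrete, here is a minimal sketch of the kind of 
reasoning involved (illustrative Swift only, not actual compiler output; the 
names are made up):

final class Node {
    var value: Int
    init(value: Int) { self.value = value }
}

// The whole lifetime of `node` is visible here: it is created locally,
// never stored anywhere that outlives the call and never handed to another
// thread, so its retain/release traffic is a candidate for the non-atomic
// variants.
func sumOfLocalNode() -> Int {
    let node = Node(value: 42)
    return node.value + 1
}

// The origin of `node` is unknown: the caller may have stored it in a
// global or captured it in a closure running on another queue. Without
// whole-program knowledge the compiler must conservatively keep the
// atomic retain/release operations.
func bump(_ node: Node) {
    node.value += 1
}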

Alternatively, a developer could provide a hint and assure the compiler that 
the app is single-threaded. One simple possibility would be a special 
-single-threaded compiler option, which would basically claim that the app 
being developed is single-threaded and thus there is no need to perform 
atomic ARC operations. In this case, all ARC operations in the code emitted 
for the user code would be marked non-atomic by default. The problem with 
this option is that the app may start multiple threads directly or indirectly 
(e.g. by calling a library API that starts a new thread), even though the 
option claimed it would not, and some references may then be shared between 
threads; the execution of such an app becomes unpredictable and may end up in 
hard-to-find crashes. Mixing object files and libraries where one subset is 
compiled with this option and another is not is another recipe for disaster. 
So one would need to be extremely cautious when using this option. 
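
To illustrate the failure mode (a sketch only; -single-threaded is just the 
hypothetical option discussed above, and the code merely shows how a 
reference can end up shared across threads behind the developer's back):

import Dispatch

final class Cache {
    var entries: [String: Int] = [:]
}

let sharedCache = Cache()

// Suppose this module were built with the hypothetical -single-threaded
// option, so every retain/release of `sharedCache` is emitted as a
// non-atomic operation. The async call below moves work to another thread
// and captures the same reference...
func refreshInBackground() {
    DispatchQueue.global().async {
        // ...so two threads now race on the same non-atomic reference
        // count. Lost updates can over- or under-release the object,
        // producing leaks or hard-to-diagnose crashes.
        sharedCache.entries["lastRefresh"] = 1
    }
}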

There could also be something in between, where one would use special 
attributes indicating something about the thread-safety of a given 
reference/type/function/etc. These hints could help the compiler reason about 
references and check whether they may escape to a different thread.
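
Purely as a sketch of what such hints might look like (the attribute 
spellings below are invented for illustration and do not exist in the 
compiler):

// Hypothetical type-level promise: instances are never shared between
// threads, so all ARC operations on them could be emitted as non-atomic.
@_singleThreaded
final class ParserState {
    var tokens: [String] = []
}

// Hypothetical parameter-level promise: the reference does not escape to
// another thread during this call, so its retains/releases inside the
// function could be non-atomic even though its origin is unknown.
func process(_ state: @_noThreadEscape ParserState) {
    state.tokens.append("eof")
}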

-Roman

> 
> On Wed, Nov 30, 2016 at 8:50 PM Roman Levenstein <rlevenst...@apple.com> wrote:
>> On Nov 30, 2016, at 6:25 PM, Jiho Choi via swift-dev <swift-dev@swift.org> wrote:
>> 
>> Thanks for clarifications.  I have a couple of follow-up questions.
>> 
>> 1. Could you please provide more information (e.g. source code location) 
>> about the optimization that applies non-atomic reference counting?  What's 
>> the scope of the optimization?  Is it method-based?
> 
> The optimization itself is not merged yet. But all the required machinery, 
> e.g. non-atomic versions of the ARC operations, a special non-atomic flag on 
> SIL instructions, etc., is already in place.
> 
> As for the prototype implementation, you can find it here, on my local branch:
> https://github.com/swiftix/swift/blob/30409865ff49a4268363cd359f82f29c9a90cce8/lib/SILOptimizer/Transforms/NonAtomicRC.cpp
> 
>> 
>> 2. Looking at the source code, I assume Swift implements immediate reference 
>> counting (i.e. immediate reclamation of dead objects), without the explicit 
>> collection phase required by techniques such as deferred reference counting 
>> or coalescing of multiple updates.  Is that right?  If so, is there any plan 
>> to implement such techniques?
> 
> Yes, that is a correct understanding. 
> Different extensions like deferred reference counting have been discussed, 
> but there are no plans to implement them anytime soon.
> 
> -Roman
> 
> 
>> 
>> On Wed, Nov 30, 2016 at 11:41 AM John McCall <rjmcc...@apple.com> wrote:
>>> On Nov 30, 2016, at 8:33 AM, Jiho Choi via swift-dev <swift-dev@swift.org> wrote:
>>> Hi,
>>> 
>>> I am new to Swift, and I have several questions about how ARC works in 
>>> Swift.
>>> 
>>> 1. I read from one of the previous discussions in the swift-evolution list 
>>> (https://lists.swift.org/pipermail/swift-evolution/Week-of-Mon-20160208/009422.html)
>>>  that ARC operations are currently not atomic as Swift has no memory model 
>>> and concurrency model.  Does it mean that the compiler generates non-atomic 
>>> instructions for updating reference counts (e.g. using incrementNonAtomic() 
>>> instead of increment() in RefCount.h)?
>> 
>> No.  We have the ability to do non-atomic reference counting as an 
>> optimization, but we only trigger it when we can prove that an object hasn't 
>> escaped yet.  Therefore, at the user level, retain counts are atomic.
>> 
>> Swift ARC is non-atomic in the sense that a read/write or write/write race 
>> on an individual property/variable/whatever has undefined behavior and can 
>> lead to crashes or leaks.  This differs from Objective-C ARC only in that a 
>> (synthesized) atomic strong or weak property in Objective-C does promise 
>> correctness even in the face of race conditions.  But this guarantee is not 
>> worth much in practice because a failure to adequately synchronize accesses 
>> to a class's instance variables is likely to have all sorts of other 
>> unpleasant effects, and the guarantee is quite expensive, so we decided not 
>> to make it in Swift.
>> 
>>> 2. If not, when does it use non-atomic ARC operations? Is there an 
>>> optimization pass to recognize local objects?
>>> 
>>> 3. Without the concurrency model in the language, if not using GCD (e.g. 
>>> all Swift benchmark applications), I assume Swift applications are 
>>> single-threaded.  Then, I think we can safely use non-atomic ARC 
>>> operations.  Am I right?
>> 
>> When we say that we don't have a concurrency model, we mean that (1) we 
>> aren't providing a more complete language solution than the options 
>> available to C programmers and (2) like C pre-C11/C++11, we have not yet 
>> formalized a memory model for concurrency that provides formal guarantees 
>> about what accesses are guaranteed to not conflict if they do race.  (For 
>> example, we are unlikely to guarantee that accesses to different properties 
>> of a struct can occur in parallel, but we may choose to make that guarantee 
>> for different properties of a class.)
>> 
>>> 4. Lastly, is there a way to measure the overhead of ARC (e.g. a compiler 
>>> flag to disable ARC)?
>> 
>> No, because ARC is generally necessary for correctness.
>> 
>> John.
> 

_______________________________________________
swift-dev mailing list
swift-dev@swift.org
https://lists.swift.org/mailman/listinfo/swift-dev
