On Oct 4, 2017, at 9:44 AM, Joe Groff <jgr...@apple.com> wrote:
>> I disagree.  The semantics being proposed perfectly overlap with the 
>> transitional plan for overlays (which matters for the next few years), but 
>> they are the wrong default for anything other than overlays and the wrong 
>> thing for long term API evolution over the next 20 years.
> 
> I disagree with this. 'inline' functions in C and C++ have to be backed by a 
> symbol in the binary in order to guarantee function pointer identity, but we 
> don't have that constraint. Without that constraint, there's almost no way 
> that having a fallback definition in the binary is better:
> 
> - It becomes an ABI compatibility liability that has to be preserved forever. 

This seems like a marginal win at best.  Saying that you want to publish a 
symbol as public API but not have it be ABI is a bit odd.  What is the use case 
(other than the Swift 3/4/5 transition period)?
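
For concreteness, here is a minimal sketch of the kind of declaration at 
issue, written with the @inlinable spelling for readability (the function 
itself is invented for illustration):

@inlinable
public func clamped(_ value: Int, to range: ClosedRange<Int>) -> Int {
    // The body is serialized into the module interface, so clients can
    // compile it directly into their own code.
    return min(max(value, range.lowerBound), range.upperBound)
}

The question is whether the defining binary must also export a clamped symbol 
forever (making it ABI), or whether the serialized body alone is the entire 
contract.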

> - It increases binary size for a function that's rarely used, and which is 
> often much larger as an outlined generic function than the simple operation 
> that can be inlined into client code. Inlining makes the most sense when the 
> inlined operation is smaller than a function call, so in many cases the net 
> dylib + executable size would increase.

I can see this argument, but you’re basically saying that a sufficiently smart 
programmer can optimize code size based on (near) perfect knowledge of the 
symbol and all clients.  I don’t think this is realistic for a number of 
reasons.  In general, an API vendor has no way to know:

1) how many clients the API will have, potentially spread across multiple 
modules that get linked into a single app.
2) which types a generic function will be used with.
3) what the code size tradeoffs are, e.g. if you have a large function that 
doesn’t use the archetype much, there is low bloat (a sketch follows this 
list).
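
As a sketch of point 3 (the function is hypothetical):

import Foundation  // for Date

@inlinable
public func report<T: CustomStringConvertible>(_ value: T) {
    let divider = String(repeating: "=", count: 40)  // type-independent
    print(divider)
    print("[\(Date())]", value.description)          // the only use of T
    print(divider)
}

Almost all of the body is independent of T, so whether serializing it into 
clients produces real bloat depends entirely on how many modules and types end 
up instantiating it - knowledge only the clients have.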

Furthermore, we have evidence from the C++ community that people are very eager 
to mark lots of things inlinable regardless of the cost of doing so.  Swift may 
end up being different, but programmers still have no general way to reason 
about code size from the declaration alone, without perfect knowledge of the 
clients.

The cost of the approach I’m advocating is that one *single* implementation 
gets generated in the module that defines the decl.  The alternative can lead 
to N instantiations of exactly the same unspecialized code (consider currying 
and similar cases) in N different modules that end up in an app.  This seems 
like the right tradeoff.
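
To make the duplication concrete (module and function names invented):

// LibKit, the defining module: one canonical unspecialized copy lives here.
@inlinable
public func firstMatch<S: Sequence>(
    in sequence: S, where predicate: (S.Element) -> Bool
) -> S.Element? {
    for element in sequence where predicate(element) {
        return element
    }
    return nil
}

// ClientA ... ClientN: each module that picks up the serialized body can
// emit its own copy of this same unspecialized code.
let firstEven = firstMatch(in: [1, 3, 4, 7], where: { $0 % 2 == 0 })

Under the model I’m describing, those N modules would instead call the one 
copy in LibKit unless the optimizer decides inlining is actually a win.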

> - It increases the uncertainty of the behavior client code sees. If an 
> inlinable function must always be emitted in the client, then client code 
> *always* gets the current definition. If an inlinable function calls into the 
> dylib when the compiler chooses not to inline it, then you may get the 
> current definition, or you may get an older definition from any published 
> version of the dylib. Ideally these all behave the same if the function is 
> inlinable, but quirks are going to be inevitable.

You’re saying that “if an API author incorrectly changes the behavior of their 
inlinable function” then your approach papers over the bug a little bit better. 
I don’t see this as something that is important to design around, not least 
because it produces other inconsistencies: what if a binary module A is built 
against the old version of that inlinable function and your app builds against 
a newer version?  Then you have two inconsistent versions in your app again.
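
A sketch of that skew (the framework and function are invented for 
illustration):

// Framework 1.0, which binary module A was compiled against:
@inlinable
public func retryLimit() -> Int { return 3 }

// Framework 2.0, which the app is compiled against, after the author
// (incorrectly) changed an inlinable body:
//     @inlinable
//     public func retryLimit() -> Int { return 5 }

// Module A baked in 3 at its build time; the app's freshly built code
// sees 5.  Both answers coexist in one process regardless of whether a
// dylib fallback symbol exists.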

More generally though, an API vendor who does this has broken the 
fragile/inlinable contract, and has therefore invoked undefined behavior - 
c'est la vie.

-Chris
