Well, I understand that it seems like there is some problem with UIntN() initialization, but I can't find any simple code that demonstrates it.

Everything below works as expected:

var x1: Int32 = 0
var x2 = Int32(0)

print(x1.dynamicType, x2.dynamicType) // Int32 Int32

// integer overflows when converted from 'Int' to 'UInt16'
//var x = UInt16(100_000)
//var x = UInt16(-10)

// negative integer cannot be converted to unsigned type 'UInt64'
// var x = UInt64(-1)

So, what code will produce unexpected behavior or an error at runtime?

On 03.06.2016 0:25, John McCall wrote:
On Jun 2, 2016, at 1:56 PM, Vladimir.S <sva...@gmail.com> wrote:
Often
this leads to static ambiguities or, worse, causes the literal to be built
using a default type (such as Int); this may have semantically very
different results which are only caught at runtime.

Seems like I'm very slow today. Could you present a couple of examples where 
such initialization (like UInt16(7)) can produce unexpected behavior or an 
error at runtime?

UIntN has unlabeled initializers taking all of the standard integer types, 
including itself.  The literal type will therefore get defaulted to Int.  The 
legal range of values for Int may not be a superset of the legal range of 
values for UIntN.  If the literal is in the legal range for an Int but not for 
the target type, this might trap at runtime.  Now, for a built-in integer type 
like UInt16, we will recognize that the coercion always traps and emit an error 
at compile-time, but this generally won't apply to other types.

John.


On 02.06.2016 19:08, John McCall via swift-evolution wrote:
The official way to build a literal of a specific type is to write the
literal in an explicitly-typed context, like so:
   let x: UInt16 = 7
or
   let x = 7 as UInt16

Nonetheless, programmers often try the following:
   UInt16(7)

Unfortunately, this does /not/ attempt to construct the value using the
appropriate literal protocol; it instead performs overload resolution using
the standard rules, i.e. considering only single-argument unlabelled
initializers of a type which conforms to IntegerLiteralConvertible.  Often
this leads to static ambiguities or, worse, causes the literal to be built
using a default type (such as Int); this may have semantically very
different results which are only caught at runtime.
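
For instance (to the best of my understanding of the current behavior), the
value below fits in UInt64, yet the call form is rejected because the literal
is first typed as Int, which it overflows; with a user-defined type, the same
mismatch would surface only at runtime:
   let ok  = 0xFFFF_FFFF_FFFF_FFFF as UInt64  // fine: literal built as UInt64
   let bad = UInt64(0xFFFF_FFFF_FFFF_FFFF)    // error: literal overflows 'Int'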

In my opinion, using this initializer-call syntax to build an
explicitly-typed literal is an obvious and natural choice with several
advantages over the "as" syntax.  However, even if you disagree, it's clear
that programmers are going to continue to independently try to use it, so
it's really unfortunate for it to be subtly wrong.

Therefore, I propose that we adopt the following typing rule:

 Given a function call expression of the form A(B) (that is, an
/expr-call/ with a single, unlabelled argument) where B is
an /expr-literal/ or /expr-collection/, if A has type T.Type for some type
T and there is a declared conformance of T to an appropriate literal
protocol for B, then the expression always resolves as a literal
construction of type T (as if the expression were written "B as A") rather
than as a general initializer call.
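
Sketching the intent with UInt16 (which conforms to the appropriate literal
protocol), the three spellings below would all mean the same thing under this
rule:
   let a: UInt16 = 7   // explicitly-typed literal
   let b = 7 as UInt16 // literal coercion
   let c = UInt16(7)   // under the rule: same as "7 as UInt16", not an
                       // overload-resolved initializer call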

Formally, this would be a special form of the argument conversion
constraint, since the type of the expression A may not be immediately known.

Note that, as specified, it is possible to suppress this typing rule by
wrapping the literal in parentheses.  This might seem distasteful; it would
be easy enough to allow the form of B to include extra parentheses.  It's
potentially useful to have a way to suppress this rule and get a normal
construction, but there are several other ways of getting that effect, such
as explicitly typing the literal argument (e.g. writing "A(Int(B))").
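
For illustration, taking the rule exactly as specified above:
   UInt16(7)        // literal construction under the proposed rule
   UInt16((7))      // extra parentheses: the argument is no longer an
                    // expr-literal, so ordinary initializer resolution applies
   UInt16(Int(7))   // explicitly-typed argument: also an ordinary
                    // initializer call, as today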

A conditional conformance counts as a declared conformance even if the
generic arguments are known to not satisfy the conditional conformance.
This permits the applicability of the rule to be decided without having to
first decide the type arguments, which greatly simplifies the type-checking
problem (and may be necessary for soundness; I didn't explore this in
depth, but it certainly feels like a very nasty sort of dependence).  We
could potentially weaken this for cases where A is a direct type reference
with bound parameters, e.g. Foo<Int>([]) or the same with a typealias, but
I think there's some benefit from having a simpler specification, both for
the implementation and for the explicability of the model.

John.

