> Like I said, the standard library and compiler conspire to make sure that easy cases like this are caught at compile time, but that would not help non-standard types that conform to IntegerLiteralConvertible.
>
> Also, even for standard types, the syntax only works statically if the literal fits in the range of Int, which may not be a superset of the desired type. For example, UInt64(0x10000000000) would not work on a 32-bit platform. It is diagnosed statically, however.

I believe I understand the problem you described, but I really can't figure out how it can produce unexpected behavior and run-time errors, as was stated in your initial message. That is why I was asking for any code that proves this. The example with UInt64(0x10000000000) on 32-bit systems raises an error at _compile_ time. Could someone provide code that illustrates the possible problems at run-time? I understand that we need to fix this somehow in any case.

For others who don't fully understand the issue (probably I'm not the only one, I hope ;-) ): if we had an Int128 type, we couldn't create an instance of it in this form:
let x = Int128(92233720368547758070)
(92233720368547758070 == Int.max * 10)
because the literal '92233720368547758070' will always be treated as an Int.

In more general terms, the difference between UIntN(xxx) and yyy as UIntN is this: xxx is treated as an Int (so it can't be greater than Int.max, for example), that Int is then passed to the UIntN(_: Int) initializer, and the UIntN we get is whatever that initializer produces. yyy, on the other hand, is treated as a UIntN literal by definition; no initializer is called, and yyy can be any value allowed for UIntN.
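A rough sketch of this difference with the standard UInt16, reusing values that appear elsewhere in this thread (this describes the behavior under discussion, before any change):

let a: UInt16 = 7        // '7' is built directly as a UInt16 literal
let b = 7 as UInt16      // same: literal coercion, no initializer is called

let c = UInt16(7)        // '7' is first treated as an Int, then passed to
                         // the UInt16(_: Int) initializer

// 100_000 fits in Int, so this becomes a call of UInt16(_: Int); the standard
// library and compiler happen to catch the overflow statically:
// let d = UInt16(100_000)  // integer overflows when converted from 'Int' to 'UInt16'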

But again, could someone help with examples of when this can produce problems at run-time?

On 03.06.2016 1:12, John McCall wrote:
On Jun 2, 2016, at 2:36 PM, Vladimir.S <sva...@gmail.com> wrote:
Well, I understand that it seems like there is some problem with UIntN() 
initialization, but I can't find any simple code that will demonstrate this...

All below works as expected:

var x1: Int32 = 0
var x2 = Int32(0)

print(x1.dynamicType, x2.dynamicType) // Int32 Int32

// integer overflows when converted from 'Int' to 'UInt16'
//var x = UInt16(100_000)
//var x = UInt16(-10)

// negative integer cannot be converted to unsigned type 'UInt64'
// var x = UInt64(-1)

So, what code will produce some unexpected behavior / error at runtime?

Like I said, the standard library and compiler conspire to make sure that easy 
cases like this are caught at compile time, but that would not help 
non-standard types that conform to IntegerLiteralConvertible.

Also, even for standard types, the syntax only works statically if the literal 
fits in the range of Int, which may not be a superset of the desired type.  For 
example, UInt64(0x10000000000) would not work on a 32-bit platform.  It is 
diagnosed statically, however.

John.


On 03.06.2016 0:25, John McCall wrote:
On Jun 2, 2016, at 1:56 PM, Vladimir.S <sva...@gmail.com> wrote:
Often
this leads to static ambiguities or, worse, causes the literal to be built
using a default type (such as Int); this may have semantically very
different results which are only caught at runtime.

Seems like I'm very slow today... Could you present a couple of examples where 
such initialization (like UInt16(7)) can produce some unexpected behavior / 
error at runtime?

UIntN has unlabeled initializers taking all of the standard integer types, 
including itself.  The literal type will therefore get defaulted to Int.  The 
legal range of values for Int may not be a superset of the legal range of 
values for UIntN.  If the literal is in the legal range for an Int but not for 
the target type, this might trap at runtime.  Now, for a built-in integer type 
like UInt16, we will recognize that the coercion always traps and emit an error 
at compile-time, but this generally won't apply to other types.

John.


On 02.06.2016 19:08, John McCall via swift-evolution wrote:
The official way to build a literal of a specific type is to write the
literal in an explicitly-typed context, like so:
  let x: UInt16 = 7
or
  let x = 7 as UInt16

Nonetheless, programmers often try the following:
  UInt16(7)

Unfortunately, this does /not/ attempt to construct the value using the
appropriate literal protocol; it instead performs overload resolution using
the standard rules, i.e. considering only single-argument unlabelled
initializers of a type which conforms to IntegerLiteralConvertible.  Often
this leads to static ambiguities or, worse, causes the literal to be built
using a default type (such as Int); this may have semantically very
different results which are only caught at runtime.

In my opinion, using this initializer-call syntax to build an
explicitly-typed literal is an obvious and natural choice with several
advantages over the "as" syntax.  However, even if you disagree, it's clear
that programmers are going to continue to independently try to use it, so
it's really unfortunate for it to be subtly wrong.

Therefore, I propose that we adopt the following typing rule:

Given a function call expression of the form A(B) (that is, an
/expr-call/ with a single, unlabelled argument) where B is
an /expr-literal/ or /expr-collection/, if A has type T.Type for some type
T and there is a declared conformance of T to an appropriate literal
protocol for B, then the expression always resolves as a literal
construction of type T (as if the expression were written "B as A") rather
than as a general initializer call.
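
To make that concrete, here is a sketch of what the rule would mean for the examples already in this thread (this is the proposed behavior, not the behavior at the time of writing):

let x = UInt16(7)              // would resolve as: 7 as UInt16, a literal construction
let y = UInt64(0x10000000000)  // the literal would be built directly as a UInt64,
                               // so this would work even on a 32-bit platform
                               // instead of failing because it cannot pass through Int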

Formally, this would be a special form of the argument conversion
constraint, since the type of the expression A may not be immediately known.

Note that, as specified, it is possible to suppress this typing rule by
wrapping the literal in parentheses.  This might seem distasteful; it would
be easy enough to allow the form of B to include extra parentheses.  It's
potentially useful to have a way to suppress this rule and get a normal
construction, but there are several other ways of getting that effect, such
as explicitly typing the literal argument (e.g. writing "A(Int(B))").
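
For example, assuming the rule as proposed, the following sketch shows the opt-outs described above:

let a = UInt16(7)        // literal construction under the rule
let b = UInt16((7))      // parenthesized literal: the rule does not apply,
                         // so this is an ordinary initializer call
let c = UInt16(Int(7))   // explicitly typed argument: also an ordinary call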

A conditional conformance counts as a declared conformance even if the
generic arguments are known to not satisfy the conditional conformance.
This permits the applicability of the rule to be decided without having to
first decide the type arguments, which greatly simplifies the type-checking
problem (and may be necessary for soundness; I didn't explore this in
depth, but it certainly feels like a very nasty sort of dependence).  We
could potentially weaken this for cases where A is a direct type reference
with bound parameters, e.g. Foo<Int>([]) or the same with a typealias, but
I think there's some benefit from having a simpler specification, both for
the implementation and for the explicability of the model.

John.


_______________________________________________
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution



