Re: [swift-evolution] [Request for Feedback] Providing defaults for reading and writing.

2017-07-10 Thread Randy Eckenrode via swift-evolution
It seems like it would be cleaner to extend CodingKey. There might be a more 
general way of doing this than just requiring a Dictionary, but it seems to 
work.

protocol DefaultingCodingKey: CodingKey, Hashable {
    static var defaults: [Self: Any] { get }
}

// Implementing the other overrides left as an exercise to the reader
extension KeyedDecodingContainer where Key: DefaultingCodingKey {

    func decode(_ type: String.Type, forKey key: Key) throws -> String {
        if let t = try self.decodeIfPresent(type, forKey: key) {
            return t
        } else {
            return Swift.type(of: key).defaults[key] as! String
        }
    }

    func decode<T: Decodable>(_ type: T.Type, forKey key: Key) throws -> T {
        if let t = try self.decodeIfPresent(type, forKey: key) {
            return t
        } else {
            return Swift.type(of: key).defaults[key] as! T
        }
    }

}

extension KeyedEncodingContainer where Key: DefaultingCodingKey {

    mutating func encode(_ value: String, forKey key: Key) throws {
        guard value != type(of: key).defaults[key] as! String else { return }
        try self.encodeIfPresent(value, forKey: key)
    }

    mutating func encode<T: Encodable & Equatable>(_ value: [T], forKey key: Key) throws {
        guard value != type(of: key).defaults[key] as! [T] else { return }
        try self.encodeIfPresent(value, forKey: key)
    }

    mutating func encode<T: Encodable & Equatable>(_ value: T, forKey key: Key) throws {
        guard value != type(of: key).defaults[key] as! T else { return }
        try self.encodeIfPresent(value, forKey: key)
    }

}

class ReferencePieceFromModel: Codable {

    public var name: String = ""
    public var styles: [String] = []

    private enum CodingKeys: String, DefaultingCodingKey {

        case name, styles

        static let defaults: [CodingKeys: Any] = [
            .name: "",
            .styles: [String]()
        ]
    }
}

Putting all of this into a playground….

let x = ReferencePieceFromModel()

let encoder = JSONEncoder()

let json = try! encoder.encode(x)
print(String(data: json, encoding: .utf8)!)

let decoder = JSONDecoder()

let a = try! decoder.decode(ReferencePieceFromModel.self, from: json)
print(a.name)
print(a.styles)

let refWithName = "{\"name\": \"Randy\"}"
let b = try! decoder.decode(ReferencePieceFromModel.self, from: refWithName.data(using: .utf8)!)
print(b.name)
print(b.styles)

let ref = "{\"name\": \"Randy\", \"styles\": [\"Swifty\"]}"
let c = try! decoder.decode(ReferencePieceFromModel.self, from: ref.data(using: .utf8)!)
print(c.name)
print(c.styles)

Prints out…

{}

[]
Randy
[]
Randy
["Swifty"]

-- 
Randy

> On Jul 10, 2017, at 8:16 PM, William Shipley via swift-evolution 
>  wrote:
> 
> Automatic substitution / removal of default values is very useful when 
> reading or writing a file, respectively, and should be supported by the 
> Codable family of protocols and objects:
> 
> • When reading, swapping in a default value for missing or corrupted values 
> makes it so hand-created or third-party-created files don’t have to write 
> every single value to make a valid file, and allows slightly corrupted files 
> to auto-repair (or get close, and let the user fix up any data that needs it 
> after) rather than completely fail to load. (Repairing on read creates a 
> virtuous cycle with user-created files, as the user will get _some_ feedback 
> on her input even if she’s messed up, for example, the type of one of the 
> properties.)
> 
> • When writing, providing a default value allows the container to skip keys 
> that don’t contain useful information. This can dramatically reduce file 
> sizes, but I think its other advantages are bigger wins: just like having 
> less source code makes a program easier to debug, having less “data code” 
> makes files easier to work with in every way — they’re easier to see 
> differences in, easier to determine corruption in, easier to edit by hand, 
> and easier to learn from.
> 
> 
> My first pass attempt at adding defaults to Codable looks like this:
> 
> 
> public class ReferencePieceFromModel : Codable {
> 
> // MARK: properties
> public let name: String = ""
> public let styles: [String] = []
> 
> 
> // MARK: 
> public required init(from decoder: Decoder) throws {
> let container = try decoder.container(keyedBy: CodingKeys.self)
> 
> self.name = container.decode(String.self, forKey: .name, defaults: 
> type(of: self).defaultsByCodingKey)
> self.styles = container.decode([String].self, forKey: .styles, 
> defaults: type(of: self).defaultsByCodingKey)
> }
> public func encode(to encoder: Encoder) throws {
> var container = encoder.container(keyedBy: CodingKeys.self)
> 
> try container.encode(name, forKey: .name, defaults: type(of: 
> self).defaultsByCodingKey)
> try container.encode(styles, forKey: .styles, defaults: type(of: 
> self).defaultsByCodingKey)
> }
> private static let 

[swift-evolution] [Pitch] Array full proposal

2017-07-10 Thread Daryle Walker via swift-evolution
Spent the past week coming up with a full proposal for fixed-size arrays. I 
wrote it mainly from the bottom upwards. There may be some inconsistencies. And 
I'm not entirely sure what "structural sub-typing" means, or if it's 
appropriate for arrays.



Sent from my iPad
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] JIT compilation for server-side Swift

2017-07-10 Thread Gábor Sebestyén via swift-evolution
Hi,

First of all, let me welcome the project. My knowledge of JITs is limited, but
I come from the Java world, where JITs play a major role. Let me share my
initial thoughts on this:

1. Runtime code optimization. The Java JIT does this pretty well. But how
can Swift code that is already optimized at compile time benefit from it?
2. Hot code swap. This is an interesting area. This feature would enable
rapid development by letting developers see their changes as soon as the
server JIT replaces modified code blocks.
3. Code injection. Java already enjoys this for things like AOP, runtime
dependency injection, code instrumentation, etc.

Regards,

Gábor


Younes Manton via swift-evolution wrote
(on 11 Jul 2017, Tue, at 0:02):

> On Mon, Jul 10, 2017 at 1:53 PM, Michael Ilseman 
> wrote:
>
>> On Jul 10, 2017, at 9:40 AM, Younes Manton via swift-evolution <
>> swift-evolution@swift.org> wrote:
>>
>> Having said that, it is with the static side in mind that I'm writing
>> this email. Despite the prototype JIT being built on OMR, the changes to
>> the static side outlined above are largely compiler agnostic APIs/ABIs that
>> anyone can use to build similar hybrid JITs or other runtime tools that
>> make sense for the server space.
>>
>> Do you have example APIs to discuss in more detail?
>>
>
> Yes, I've prepared patches for the 3 items I discussed in my initial
> email. I've rebased onto swift/master patches that we think are a decent
> starting point: a high level -enable-jit-support frontend option [1] and
> patchable function support.[2]
>
> Another patch (still based on Swift 3.0 because it needs to be implemented
> differently for master) for inserting in main() a call to an stdlib routine
> that will attempt to dlopen() an external "runtime" library, e.g. a JIT, is
> on another branch.[3] If ported to master as-is it would probably emit an
> apply to the stdlib routine at beginning of main() before argc/argv are
> captured. Having said that there are other ways to inject yourself into a
> process (I've been looking into LD_PRELOAD/exec(), for example, which
> wouldn't require changes to swiftc) so alternatives are welcome for
> discussion.
>
>
>> I think that there’s a lot of potential gains for runtime optimization of
>> Swift programs, but the vast majority of benefits will likely fall out from:
>>
>> 1. Smashing resilience barriers at runtime.
>> 2. Specializing frequently executed generic code, enabling subsequent
>> inlining and further optimization.
>>
>> These involve deep knowledge of Swift-specific semantics. They are
>> probably better handled by running Swift’s own optimizer at runtime rather
>> than teaching OMR or some other system about Swift. This is because Swift’s
>> SIL representation is constantly evolving, and the optimizations already in
>> the compiler are always up to date. I’m curious, what benefits of OMR are
>> you hoping to gain, and how does that weigh against the complexity of
>> making the two systems interact?
>>
>
> Yes, #1 and #2 are prime candidates.
>
> We're not so interested in retreading the same ground as the SIL optimizer
> if we can help it; ideally we would consume optimized SIL and be able to
> further optimize it without overlapping significantly with the SIL
> optimizer, but I think some level of overlap and a non-trivial coupling with
> the SIL representation will be likely, unfortunately.
>
> Having access to and being able to re-run the SIL optimizer at runtime,
> perhaps after feeding it runtime information and new constraints and
> thereby enabling opportunities that weren't available at build time is a
> naturally interesting idea. I haven't actually looked at that part of the
> Swift code base in detail, but I imagine it's not really in the form of an
> easily consumable library for an out-of-tree code base; our prototype
> re-used the SIL deserializer at runtime and that was painful and hacky so I
> imagine a similar experience with the SIL optimizer as it currently is.
>
> The benefit of the OMR compiler is that it is a JIT compiler first and
> foremost and has evolved over the years for that role. More practically,
> it's a code base we're much more familiar with so our knowledge currently
> goes a lot farther and it was a quicker path to prototyping something in a
> reasonable amount of time. The learning curve for Swift the language +
> swiftc & std libs + SIL was already significant in and of itself. Having
> said that I fully recognize that there are obvious and natural reasons to
> consider a SIL optimizer + LLVM JIT in place of what we've been hacking
> away on. I don't think we're at a point where we can answer your last
> question, it might end up that a SIL-consuming out-of-tree compiler based
> on a different IL will have a hard time keeping up with Swift internals and
> will therefore not be able to do the sorts of things we think a JIT would
> excel at, but we're open to a little exploration to see how well it 

Re: [swift-evolution] [Request for Feedback] Providing defaults for reading and writing.

2017-07-10 Thread Greg Parker via swift-evolution

> On Jul 10, 2017, at 5:16 PM, William Shipley via swift-evolution 
>  wrote:
> 
> (Note the horrible hack on KeyedEncodingContainer where I had to special-case 
> arrays of Equatables, I guess because the compiler doesn’t know an array of 
> Equatables is Equatable itself?)

Correct. Swift does not yet have the necessary language machinery to express 
"Array<T> is Equatable whenever T is Equatable". 

SE-0143 "Conditional conformances" is approved but not yet implemented.
https://github.com/apple/swift-evolution/blob/master/proposals/0143-conditional-conformances.md
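For readers unfamiliar with SE-0143, a minimal sketch of the shape of a conditional conformance once the feature is implemented (the `Pair` wrapper here is purely illustrative; the standard library itself would supply `Array`'s conformance):

```swift
// Illustrative only: a conditional conformance in the shape of
// "Pair<T> is Equatable whenever T is Equatable" (SE-0143).
struct Pair<T> {
    let first: T
    let second: T
}

extension Pair: Equatable where T: Equatable {
    static func == (lhs: Pair, rhs: Pair) -> Bool {
        return lhs.first == rhs.first && lhs.second == rhs.second
    }
}

print(Pair(first: 1, second: 2) == Pair(first: 1, second: 2)) // prints "true"
```

With conformances like this, the array-specific `encode` overload in the original email becomes unnecessary, because `!=` on `[T]` is available whenever `T` is `Equatable`.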
 



-- 
Greg Parker gpar...@apple.com   Runtime Wrangler




Re: [swift-evolution] [Pitch] Guard/Catch

2017-07-10 Thread Greg Parker via swift-evolution

> On Jul 10, 2017, at 1:51 AM, David Hart via swift-evolution 
>  wrote:
> 
> I know we can’t do much about it now, but if optional binding had used the 
> same syntax as it does in pattern matching, we wouldn’t be having this 
> discussion:
> 
> guard let x = try doSomething() catch {
> // handle error
> }
> 
> guard let x? = doSomething() else {
> // handle when nil
> }

We tried pattern-matching syntax in `if let` a while ago. It was unbelievably 
unpopular. We changed it back.


-- 
Greg Parker gpar...@apple.com   Runtime Wrangler




[swift-evolution] [Request for Feedback] Providing defaults for reading and writing.

2017-07-10 Thread William Shipley via swift-evolution
Automatic substitution / removal of default values is very useful when reading 
or writing a file, respectively, and should be supported by the Codable 
family of protocols and objects:

• When reading, swapping in a default value for missing or corrupted values 
makes it so hand-created or third-party-created files don’t have to write every 
single value to make a valid file, and allows slightly corrupted files to 
auto-repair (or get close, and let the user fix up any data that needs it 
after) rather than completely fail to load. (Repairing on read creates a 
virtuous cycle with user-created files, as the user will get _some_ feedback on 
her input even if she’s messed up, for example, the type of one of the 
properties.)

• When writing, providing a default value allows the container to skip keys 
that don’t contain useful information. This can dramatically reduce file sizes, 
but I think its other advantages are bigger wins: just like having less source 
code makes a program easier to debug, having less “data code” makes files 
easier to work with in every way — they’re easier to see differences in, easier 
to determine corruption in, easier to edit by hand, and easier to learn from.


My first pass attempt at adding defaults to Codable looks like this:


public class ReferencePieceFromModel : Codable {

    // MARK: properties
    public let name: String = ""
    public let styles: [String] = []


    // MARK: 
    public required init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKeys.self)

        self.name = container.decode(String.self, forKey: .name, defaults: type(of: self).defaultsByCodingKey)
        self.styles = container.decode([String].self, forKey: .styles, defaults: type(of: self).defaultsByCodingKey)
    }
    public func encode(to encoder: Encoder) throws {
        var container = encoder.container(keyedBy: CodingKeys.self)

        try container.encode(name, forKey: .name, defaults: type(of: self).defaultsByCodingKey)
        try container.encode(styles, forKey: .styles, defaults: type(of: self).defaultsByCodingKey)
    }
    private static let defaultsByCodingKey: [CodingKeys : Any] = [
        .name : "",
        .styles : [String]()
    ]


    // MARK: private
    private enum CodingKeys : String, CodingKey {
        case name
        case styles
    }
}

With just a couple additions to the Swift libraries:

extension KeyedDecodingContainer where Key : Hashable {
    func decode<T>(_ type: T.Type, forKey key: Key, defaults: [Key : Any]) -> T where T : Decodable {
        if let typedValueOptional = try? decodeIfPresent(T.self, forKey: key), let typedValue = typedValueOptional {
            return typedValue
        } else {
            return defaults[key] as! T
        }
    }
}

extension KeyedEncodingContainer where Key : Hashable {
    mutating func encode<T>(_ value: T, forKey key: Key, defaults: [Key : Any]) throws where T : Encodable & Equatable {
        if value != (defaults[key] as! T) {
            try encode(value, forKey: key)
        }
    }

    mutating func encode<T>(_ value: [T], forKey key: Key, defaults: [Key : Any]) throws where T : Encodable & Equatable { // I AM SO SORRY THIS IS ALL I COULD FIGURE OUT TO MAKE [String] WORK!
        if value != (defaults[key] as! [T]) {
            try encode(value, forKey: key)
        }
    }
}


(Note the horrible hack on KeyedEncodingContainer where I had to special-case 
arrays of Equatables, I guess because the compiler doesn’t know an array of 
Equatables is Equatable itself?)


Problems with this technique I’ve identified are:

⑴ It doesn’t allow one to add defaults without manually writing the init(from:) 
and encode(to:), ugh.
⑵ The programmer has to add 'type(of: self).defaultsByCodingKey’ to every call, 
ugh.

Both of these could possibly be worked around if we could add an optional 
method to the Codable protocol, that would look something like: 

public static func default<Key>(keyedBy type: Key.Type, key: Key) -> Any? 
where Key : CodingKey

(the above line isn’t tested and doubtlessly won’t work as typed and has tons 
of think-os.)

This would get called by KeyedEncodingContainers and KeyedDecodingContainers 
only for keys that are Hashable (which I think is all keys, but you can stick 
un-keyed sub-things in Keyed containers and obviously those can’t have defaults 
just for them) and the container would be asked to do the comparison itself, 
with ‘==‘. 

Something I haven’t tried to address here is what to do if values are NOT 
Equatable — then of course ‘==‘ won’t work. One approach to this would be to 
provide a way for the static func above to return ‘Hey, I don’t have anything 
meaningful for you for this particular property, because it’s not Equatable.’ 
This could be as simple as returning ‘nil’, which would also be a decent way to 
say, “This property has no meaningful default” which is also needed.
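One way to play with the shape of that idea today is a standalone protocol (all names here are hypothetical; the actual suggestion is to add the hook to the Codable protocols themselves, which only the standard library could do):

```swift
// Hypothetical sketch: a per-key default provider where returning nil
// means "this property has no meaningful default".
protocol DefaultProviding {
    static func defaultValue<Key: CodingKey>(keyedBy type: Key.Type, key: Key) -> Any?
}

struct Piece: DefaultProviding {
    var name: String = ""
    var styles: [String] = []

    enum CodingKeys: String, CodingKey { case name, styles }

    static func defaultValue<Key: CodingKey>(keyedBy type: Key.Type, key: Key) -> Any? {
        switch key.stringValue {
        case "name":   return ""
        case "styles": return [String]()
        default:       return nil   // no meaningful default for this key
        }
    }
}
```

A container could then consult `defaultValue(keyedBy:key:)` on decode, substituting the returned value when the key is missing and skipping the key on encode when the value equals its default.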

Alternatively, one could imagine adding TWO callbacks in the Codable protocol 
for this kind of case, which are 

Re: [swift-evolution] JIT compilation for server-side Swift

2017-07-10 Thread Younes Manton via swift-evolution
On Mon, Jul 10, 2017 at 1:53 PM, Michael Ilseman  wrote:

>
> On Jul 10, 2017, at 9:40 AM, Younes Manton via swift-evolution <
> swift-evolution@swift.org> wrote:
>
> Having said that, it is with the static side in mind that I'm writing this
> email. Despite the prototype JIT being built on OMR, the changes to the
> static side outlined above are largely compiler agnostic APIs/ABIs that
> anyone can use to build similar hybrid JITs or other runtime tools that
> make sense for the server space.
>
>
> Do you have example APIs to discuss in more detail?
>

Yes, I've prepared patches for the 3 items I discussed in my initial email.
I've rebased onto swift/master patches that we think are a decent starting
point: a high level -enable-jit-support frontend option [1] and patchable
function support.[2]

Another patch (still based on Swift 3.0 because it needs to be implemented
differently for master) for inserting in main() a call to an stdlib routine
that will attempt to dlopen() an external "runtime" library, e.g. a JIT, is
on another branch.[3] If ported to master as-is it would probably emit an
apply to the stdlib routine at beginning of main() before argc/argv are
captured. Having said that there are other ways to inject yourself into a
process (I've been looking into LD_PRELOAD/exec(), for example, which
wouldn't require changes to swiftc) so alternatives are welcome for
discussion.


> I think that there’s a lot of potential gains for runtime optimization of
> Swift programs, but the vast majority of benefits will likely fall out from:
>
> 1. Smashing resilience barriers at runtime.
> 2. Specializing frequently executed generic code, enabling subsequent
> inlining and further optimization.
>
> These involve deep knowledge of Swift-specific semantics. They are
> probably better handled by running Swift’s own optimizer at runtime rather
> than teaching OMR or some other system about Swift. This is because Swift’s
> SIL representation is constantly evolving, and the optimizations already in
> the compiler are always up to date. I’m curious, what benefits of OMR are
> you hoping to gain, and how does that weigh against the complexity of
> making the two systems interact?
>

Yes, #1 and #2 are prime candidates.

We're not so interested in retreading the same ground as the SIL optimizer
if we can help it; ideally we would consume optimized SIL and be able to
further optimize it without overlapping significantly with the SIL
optimizer, but I think some level of overlap and a non-trivial coupling with
the SIL representation will be likely, unfortunately.

Having access to and being able to re-run the SIL optimizer at runtime,
perhaps after feeding it runtime information and new constraints and
thereby enabling opportunities that weren't available at build time is a
naturally interesting idea. I haven't actually looked at that part of the
Swift code base in detail, but I imagine it's not really in the form of an
easily consumable library for an out-of-tree code base; our prototype
re-used the SIL deserializer at runtime and that was painful and hacky so I
imagine a similar experience with the SIL optimizer as it currently is.

The benefit of the OMR compiler is that it is a JIT compiler first and
foremost and has evolved over the years for that role. More practically,
it's a code base we're much more familiar with so our knowledge currently
goes a lot farther and it was a quicker path to prototyping something in a
reasonable amount of time. The learning curve for Swift the language +
swiftc & std libs + SIL was already significant in and of itself. Having
said that, I fully recognize that there are obvious and natural reasons to
consider a SIL optimizer + LLVM JIT in place of what we've been hacking
away on. I don't think we're at a point where we can answer your last
question; it might end up that a SIL-consuming out-of-tree compiler based
on a different IL will have a hard time keeping up with Swift internals and
will therefore not be able to do the sorts of things we think a JIT would
excel at, but we're open to a little exploration to see how well it works
out. At the very least the changes to the static side of the equation
are/will be useful to any other hybrid JIT or whatever other runtime tools
people can envision, so from the Swift community's perspective I hope there
will at least be some benefits.

Thanks for taking the time.

[1]
https://github.com/ymanton/swift/commit/8f5f53c7398ba9bc38dd55c60871cfe3ded68d73
[2]
https://github.com/ymanton/swift/commit/54e7736788f716f1c896f9b0ad56b13bcd8eb136
[3]
https://github.com/ymanton/swift/commit/f59a232e176bd050373be00228437092706cc092


Re: [swift-evolution] JIT compilation for server-side Swift

2017-07-10 Thread Younes Manton via swift-evolution
On Mon, Jul 10, 2017 at 1:31 PM, Benjamin Spratling 
wrote:

> I'm aware that Java re-compiles sections to make incremental performance
> improvements based on a statistical analysis of usage.  I'm not familiar
> enough with other uses of JIT on the backend to know what advantages it
> would have, beyond less time from begin compile to launch.  Could you list
> a few benefits?
>

The primary benefit is better performance by being able to exploit
knowledge about the program's behaviour and the hardware and software
environment it's running in that is only (or at least more easily and
accurately) available at runtime. In the case we're interested in, "less
time from begin compile to launch" would not be a benefit, since we're
interested in JIT compiling an already built program.


> One of the goals of the Swift team appears to have been to achieve
> predictable performance, to the tune of finding object deallocations as too
> unpredictable.  So would you envision this as being an opt-in per compile?
>

Variations in behaviour and performance are, unfortunately, a common hazard
with JIT compilation. It depends on the implementation of the JIT in
question of course; if you put emphasis on predictability you can certainly
engineer a JIT that favours predictability over peak performance. I
personally think anyone embarking on this sort of thing would hopefully
keep their users' concerns in mind and try not to subvert the language's
design goals and principles if it can be helped.

Opt-in/out is an interesting idea (that I've personally considered more so
for my own debugging purposes). An annotation that works like the familiar
inline/alwaysinline/neverinline might be useful, with the hope being that a
JIT would do "the right thing" and free you from having to care.


Re: [swift-evolution] [Pitch] KeyPath based map, flatMap, filter

2017-07-10 Thread Dave Abrahams via swift-evolution

on Sun Jul 09 2017, Brent Royal-Gordon  wrote:

> But however we achieve it, I think a spoonful of
> syntactic sugar would help the medicine go down.

Let me be clear: syntactic sugar matters.  Otherwise, we'd all be
programming directly in LLVM IR.  It's just a question of what you have
to pay to get it.

>> By the way, if you're worried about whether subtyping will fly, I've
>> recently been thinking there might be a role for a “promotion” operator
>> that enables lossless “almost-implicit” conversions, e.g.:
>> 
>>   someNumber^      is equivalent to   numericCast(someNumber)
>>   \.someKeyPath^   is equivalent to   { $0\.someKeyPath }
>>   someSubstring^   is equivalent to   String(someSubstring)
>> 
>>etc.
>
> I actually played with something like this years ago (pre-open source,
> IIRC), but I used `^` as a prefix operator and made it support only
> widening conversions. But it was old code, and redoing it nerd-sniped
> me so hard that I kind of ended up making a whole GitHub project from
> it: 
>
> The main component is an `Upconvertible` protocol which encapsulates
> the conversion. That works really well in some ways, but it also
> creates some important limitations:
>
> 1. I had trouble incorporating downconversions in a reasonable
> way. Key paths in particular would require either compiler support or
> some really hacky, fragile code that opened up the closure context and
> pulled out the KeyPath object.
>
> 2. There's no good way to support more than one upconversion from a
> single type. (For instance, you can't make `UInt16` upconvert to both
> `Uint32` and `Int32`.)
>
> 3. Even if #2 were somehow fixed, you still can't make all
> `LosslessStringConvertible` types conform to `Upconvertible`.
>
> 4. Can't upconvert from a structural type, of course.

AFAICT, all of the above come down to having tried to build this idiom
around a protocol with an associated type.  I think of it as a
special-case syntactic shortcut for “value-preserving conversion to
deduced type.” Just overload the operator and be done with it.  Maybe
protocols like BinaryInteger should have a generic operator, but this
doesn't deserve a protocol of its own.
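As a concrete illustration of "just overload the operator," a hedged sketch (the postfix `^` declarations and the `Person` type are assumptions for the example, not part of any accepted proposal):

```swift
postfix operator ^

// Substring -> String: a value-preserving conversion to a deduced type.
postfix func ^ (substring: Substring) -> String {
    return String(substring)
}

// KeyPath -> getter closure, so `people.map((\Person.name)^)` works.
postfix func ^ <Root, Value>(keyPath: KeyPath<Root, Value>) -> (Root) -> Value {
    return { $0[keyPath: keyPath] }
}

struct Person {
    let name: String
}

let people = [Person(name: "Ada"), Person(name: "Grace")]
print(people.map((\Person.name)^)) // prints ["Ada", "Grace"]
```

Each conversion is a plain overload on the concrete types involved, so there is no associated-type protocol to fight and each source type can offer several promotions.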

> 5. I wanted to support passing through any number of valid
> upconversions with a single `^` operator, but the only way I could
> find to do that was to overload the operator with a two-step version,
> a three-step version, etc.
>
> 6. Upconverting a `\.keyPath` expression caused an ambiguity error; I
> had to overload the operator to make it favor `KeyPath`. (Workaround
> code:
> https://github.com/brentdax/Upconvert/blob/master/Upconvert/Conformances/KeyPath.swift#L25)

Meh; that's a fact of life when overloading.

> Several-to-all of these could be avoided with a built-in language feature.

IMO no language feature is needed for this.

> As for the ergonomics…well, `people.map(^\.name)` definitely feels
> better than the closure alternative. But it's something you have to
> learn is possible, and even if you knew about `^` in the context of
> (say) numeric conversions, I'm not sure people would think to try it
> there. It basically means you need to know about three slightly
> esoteric features instead of two; I'm not sure people will discover
> that.

Yes, but there's a trade-off between discoverability, and introducing
more implicit conversions, which will slow down the type checker and can
make errors harder to understand.  Note: I'm not arguing for either
approach in particular.  They're just available alternatives.

-- 
-Dave


Re: [swift-evolution] JIT compilation for server-side Swift

2017-07-10 Thread Michael Ilseman via swift-evolution

> On Jul 10, 2017, at 9:40 AM, Younes Manton via swift-evolution 
>  wrote:
> 
> Hi,
> 
> Last year a small group of developers from the IBM Runtimes compiler team 
> undertook a project to explore JIT compilation for Swift, primarily aimed at 
> server-side Swift. The compilation model we settled on was a hybrid approach 
> that combined static compilation via swiftc with dynamic compilation via a 
> prototype JIT compiler based on Eclipse OMR.[1]
> 
> This prototype JIT compiler (targeting Linux specifically) functioned by 
> having itself loaded by a Swift process at runtime, patching Swift functions 
> so that they may be intercepted, recompiling them from their SIL 
> representations, and redirecting callers to the JIT compiled version. In 
> order to accomplish this we needed to make some changes to the static 
> compiler and the target program's build process.
> 
> * First, we modified the compiler to emit code at the beginning of main() 
> that will attempt to dlopen() the JIT compiler, and if successful, call its 
> initialization routine. If unsuccessful the program would simply carry on 
> executing the rest of main().
> 
> * Second, we modified all Swift functions to be patchable by giving them the 
> "patchable-function" LLVM attribute (making the first instruction suitable to 
> be patched over with a short jump) and attaching 32 bytes of prefix data 
> (suitable to hold a long jump to a JIT hook function and some extra data) to 
> the function's code. This was controlled by a frontend "-enable-jit" switch.
> 
> * Third, when building the target program we first compiled the Swift sources 
> to a .sib (binary SIL) file, then via ld and objcopy turned the .sib into a 
> .o containing a .sib data section, then compiled the sources again into an 
> executable, this time linking with the .o containing the binary SIL. This 
> embedded SIL is what was consumed at runtime by the JIT compiler in order to 
> recompile Swift functions on the fly. (Ideally this step would be done by the 
> static compiler itself (and is not unlike the embedding of LLVM bitcode in a 
> .llvmbc section), but that would have been a significant undertaking so for 
> prototyping purposes we did it at target program build time.)
> 
> That's the brief, high level description of what we did, particularly as it 
> relates to the static side of this hybrid approach. The resulting prototype 
> JIT was able to run and fully recompile a non-trivial (but constrained) 
> program at comparable performance to the purely static version. For anyone 
> interested in more details about the project as a whole, including how the 
> prototype JIT functioned, the overhead it introduced, and the quality of code 
> it emitted, I'll point you to Mark Stoodley's recent tech talk.[2]
> 
> Having said that, it is with the static side in mind that I'm writing this 
> email. Despite the prototype JIT being built on OMR, the changes to the 
> static side outlined above are largely compiler agnostic APIs/ABIs that 
> anyone can use to build similar hybrid JITs or other runtime tools that make 
> sense for the server space.

Do you have example APIs to discuss in more detail?

> As such, we felt that it was a topic that was worth discussing early and in 
> public in order to allow any and all potentially interested parties an 
> opportunity to weigh in. With this email we wanted to introduce ourselves to 
> the wider Swift community and solicit feedback on 1) the general idea of JIT 
> compilation for server-side Swift, 2) the hybrid approach in particular, and 
> 3) the changes mentioned above and future work in the static compiler to 
> facilitate 1) and 2). To that end, we'd be happy to take questions and 
> welcome any discussion on this subject.
> 

I think that there’s a lot of potential gains for runtime optimization of Swift 
programs, but the vast majority of benefits will likely fall out from:

1. Smashing resilience barriers at runtime.
2. Specializing frequently executed generic code, enabling subsequent inlining 
and further optimization.

These involve deep knowledge of Swift-specific semantics. They are probably 
better handled by running Swift’s own optimizer at runtime rather than teaching 
OMR or some other system about Swift. This is because Swift’s SIL 
representation is constantly evolving, and the optimizations already in the 
compiler are always up to date. I’m curious, what benefits of OMR are you 
hoping to gain, and how does that weigh against the complexity of making the 
two systems interact?
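To make point 2 concrete, here is a hypothetical sketch of the kind of code that benefits (the function and the call sites are made up for illustration; nothing here is from the prototype):

```swift
// When T is unknown, every `+=` goes through a protocol witness table.
// A runtime specializer that notices `sum` is hot with T == Int could
// emit a non-generic version in which `+=` inlines to a machine add.
func sum<T: Numeric>(_ values: [T]) -> T {
    var total: T = 0          // Numeric implies ExpressibleByIntegerLiteral
    for value in values {
        total += value        // generic dispatch until specialized
    }
    return total
}

print(sum([1, 2, 3]))         // 6
print(sum([1.5, 2.5]))        // 4.0
```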



> (As for the prototype itself, we intend to open source it either in its 
> current state [based on Swift 3.0 and an early version of OMR] or in a more 
> up-to-date state in the very near future.)
> 
> Thank you kindly,
> Younes Manton
> 
> [1] http://www.eclipse.org/omr/  & 
> https://github.com/eclipse/omr 
> [2] 

Re: [swift-evolution] JIT compilation for server-side Swift

2017-07-10 Thread Michael Ilseman via swift-evolution
No, this is completely unrelated. This is about runtime optimization of 
already-running Swift programs.


> On Jul 10, 2017, at 10:40 AM, Jacob Williams via swift-evolution 
>  wrote:
> 
> Pardon my lack of knowledge about JIT compilation, but does this open the 
> realm of possibilities to a client-side Swift that would allow web developers 
> to write Swift code rather than JavaScript?
> 
>> On Jul 10, 2017, at 10:40 AM, Younes Manton via swift-evolution wrote:
>> 
>> [...]

Re: [swift-evolution] JIT compilation for server-side Swift

2017-07-10 Thread Jacob Williams via swift-evolution
Pardon my lack of knowledge about JIT compilation, but does this open the realm 
of possibilities to a client-side Swift that would allow web developers to 
write Swift code rather than JavaScript?

> On Jul 10, 2017, at 10:40 AM, Younes Manton via swift-evolution wrote:
> 
> [...]

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


[swift-evolution] JIT compilation for server-side Swift

2017-07-10 Thread Younes Manton via swift-evolution
Hi,

Last year a small group of developers from the IBM Runtimes compiler team
undertook a project to explore JIT compilation for Swift, primarily aimed
at server-side Swift. The compilation model we settled on was a hybrid
approach that combined static compilation via swiftc with dynamic
compilation via a prototype JIT compiler based on Eclipse OMR.[1]

This prototype JIT compiler (targeting Linux specifically) functioned by
having itself loaded by a Swift process at runtime, patching Swift
functions so that they may be intercepted, recompiling them from their SIL
representations, and redirecting callers to the JIT compiled version. In
order to accomplish this we needed to make some changes to the static
compiler and the target program's build process.

* First, we modified the compiler to emit code at the beginning of main()
that will attempt to dlopen() the JIT compiler, and if successful, call its
initialization routine. If unsuccessful the program would simply carry on
executing the rest of main().
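The effect of that emitted prologue can be sketched as ordinary Swift (hedged: the real code is compiler-generated, and the library and symbol names here — libswiftjit.so, jit_init — are made up for illustration):

```swift
#if canImport(Glibc)
import Glibc          // dlopen/dlsym/RTLD_NOW on Linux
#else
import Darwin         // dlopen/dlsym/RTLD_NOW on macOS
#endif

typealias JITInitFn = @convention(c) () -> Void

// Try to load the JIT; if anything fails, just keep running statically.
if let handle = dlopen("libswiftjit.so", RTLD_NOW),
   let symbol = dlsym(handle, "jit_init") {
    let initJIT = unsafeBitCast(symbol, to: JITInitFn.self)
    initJIT()  // hand control to the JIT's initialization routine
}
// ...the rest of main() carries on either way
```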

* Second, we modified all Swift functions to be patchable by giving them
the "patchable-function" LLVM attribute (making the first instruction
suitable to be patched over with a short jump) and attaching 32 bytes of
prefix data (suitable to hold a long jump to a JIT hook function and some
extra data) to the function's code. This was controlled by a frontend
"-enable-jit" switch.

* Third, when building the target program we first compiled the Swift
sources to a .sib (binary SIL) file, then via ld and objcopy turned the
.sib into a .o containing a .sib data section, then compiled the sources
again into an executable, this time linking with the .o containing the
binary SIL. This embedded SIL is what was consumed at runtime by the JIT
compiler in order to recompile Swift functions on the fly. (Ideally this
step would be done by the static compiler itself (and is not unlike the
embedding of LLVM bitcode in a .llvmbc section), but that would have been a
significant undertaking so for prototyping purposes we did it at target
program build time.)

That's the brief, high level description of what we did, particularly as it
relates to the static side of this hybrid approach. The resulting prototype
JIT was able to run and fully recompile a non-trivial (but constrained)
program at comparable performance to the purely static version. For anyone
interested in more details about the project as a whole, including how the
prototype JIT functioned, the overhead it introduced, and the quality of
code it emitted, I'll point you to Mark Stoodley's recent tech talk.[2]

Having said that, it is with the static side in mind that I'm writing this
email. Despite the prototype JIT being built on OMR, the changes to the
static side outlined above are largely compiler agnostic APIs/ABIs that
anyone can use to build similar hybrid JITs or other runtime tools that
make sense for the server space. As such, we felt that it was a topic that
was worth discussing early and in public in order to allow any and all
potentially interested parties an opportunity to weigh in. With this email
we wanted to introduce ourselves to the wider Swift community and solicit
feedback on 1) the general idea of JIT compilation for server-side Swift,
2) the hybrid approach in particular, and 3) the changes mentioned above
and future work in the static compiler to facilitate 1) and 2). To that
end, we'd be happy to take questions and welcome any discussion on this
subject.

(As for the prototype itself, we intend to open source it either in its
current state [based on Swift 3.0 and an early version of OMR] or in a more
up-to-date state in the very near future.)

Thank you kindly,
Younes Manton

[1] http://www.eclipse.org/omr/ & https://github.com/eclipse/omr
[2] http://www.ustream.tv/recorded/105013815 (Swift JIT starts at ~28:20)


Re: [swift-evolution] [Pitch] Guard/Catch

2017-07-10 Thread Elviro Rocca via swift-evolution
I don't think that "guard let x? = ..." would be syntactically correct. AFAIK, 
for matching the pattern "let x? =", as for any other pattern, you need to use 
"case"; in fact, the following is perfectly valid:

guard case let x? = doSomething() else {
    // handle when nil
}

I think that "guard let x =" is in fact sugar for "guard case let x? =", so 
your example should really be:

guard case let x? = try doSomething() catch {
    // handle error
} else {
    // handle when nil
}

This would basically mean "add an extra catch when pattern-matching with a 
throwing function", same as:

switch try doSomething() {
case let x?:
    // handle
case nil:
    // handle
} catch {
    // handle
}
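For contrast, the separated two-step version is expressible today; a runnable sketch (doSomething here is a stand-in for any throwing, Optional-returning call):

```swift
enum ParseError: Error { case malformed }

// Stand-in for a throwing function that returns an Optional.
func doSomething() throws -> Int? {
    return 42
}

do {
    switch try doSomething() {
    case let x?:
        print("value: \(x)")          // handle the value
    case nil:
        print("was nil")              // handle nil
    }
} catch {
    print("error: \(error)")          // handle the error
}
// prints "value: 42"
```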

Honestly I'm still not sold on conflating the two things. I don't think it 
would be problematic to clearly separate the case where I'm binding the result 
of a throwing function (which in fact is isomorphic to an Either) and the case 
where I'm binding an Optional.


Elviro

> On 10 Jul 2017, at 10:44, David Hart wrote:
> 
> [...]

Re: [swift-evolution] [Pre-pitch] Allowing enums inside protocols?

2017-07-10 Thread Akshay Hegde via swift-evolution
Wouldn’t the following work just as well for providing a namespace?

struct Foozle {
enum Errors: Error {
case malformedFoozle
case tooManyFoozles
}
}
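It does, and it compiles today. A runnable sketch (the validate function is made up just to exercise the namespaced cases):

```swift
struct Foozle {
    enum Errors: Error {
        case malformedFoozle
        case tooManyFoozles
    }
}

// Hypothetical use site: the cases read through the Foozle namespace.
func validate(foozleCount: Int) throws {
    if foozleCount > 3 { throw Foozle.Errors.tooManyFoozles }
}

do {
    try validate(foozleCount: 5)
} catch Foozle.Errors.tooManyFoozles {
    print("too many foozles")
} catch {
    print("other error: \(error)")
}
// prints "too many foozles"
```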

~Akshay

> On Jul 8, 2017, at 17:24, Jonathan Hull via swift-evolution 
>  wrote:
> 
> I *really* want this as well.
> 
> I think there was a serious proposal to do this early in Swift 4.  Not sure 
> why it stalled, but I seem to remember it being technically possible.
> 
> Thanks,
> Jon
> 
>> On Jul 8, 2017, at 4:21 PM, William Shipley via swift-evolution 
>>  wrote:
>> 
>> Does anyone know if there's some good tech reason to not allow, like:
>> 
>> protocol Foozle {
>>  enum Errors: Error {
>>  case malformedFoozle
>>  case tooManyFoozles
>>  }
>> }
>> 
>> Like, to me all this is doing is giving “Errors” a nice namespace, but the 
>> compiler might have other thoughts.
>> 
>> -W
> 



[swift-evolution] [Pitch] Guard/Catch

2017-07-10 Thread Richard Adem via swift-evolution
One of the reasons I like this proposal is that at a higher level we are
checking for errors and have the ability to exit early if there is one; I
think this aligns well with what `guard` represents, at least to me. Using
`try?` effectively ignores the error, so if we want convenience we have to
give up accuracy in the error handling.

It also encourages error handling: if we have a function that uses the
result of a throwing function, it's very easy to do a `guard` on `try?` and
return a generic error instead of putting the call in a do/catch block.
With this proposal we can easily put the specific error handling in the
catch block.

The proposal also changes the concept of guard somewhat: today it is tied
directly to conditionals, and `guard/else` is the core concept. The proposal
instead treats `guard` as the core of an early-exit statement, leaving the
handling of its failure up to how we define it.

> The obvious problem is that `guard let` in Swift is closely associated
with
> optional unwrapping. The reader is right to expect a non-optional `result`
> with `guard let` regardless of the word that comes after conditional
> expression.

Yes, this does change that idea, but `guard let` is always accompanied by
else; additionally, because we also use guard along with other conditionals
(`guard x > 0 else {}`), there isn't a strong tie of guard to optionals. The
inferred result type is set by the entire line, much like `let a = x > 0 ?
"higher" : "lower"`.

> The nesting and ceremony, to me, were part of Swift’s philosophy of
making error handling explicit.  Merging catch blocks into guards saves you
maybe 3-10 lines if you intended to actually handle the error(s), otherwise
this effectively try?’s  into a failable pattern match.  At which point,
you have to wonder if the error-throwing function you wrote wouldn’t be
better off just returning an Optional if you’re going to discard the
semantic content of the error.

I think this proposal goes beyond convenience, as we are stating that we
want the returned value to be in the same scope as the code that uses it.

If we want to save lines we can do that anyway with something like:
```
func divide(x: Int, y: Int) throws -> Int { ... }

let z: Int
do { z = try divide(x: 10, y: 5) } catch {
    print(error)
    return
}
```

But we always want `z` to be the result of `divide(x:y:)`; having it outside
the do/catch looks like we may intend it to be set by something else. With
this proposal we are tying `z` to the result of `divide(x:y:)`.

Another way could be to use a closure

```
let z: Int? = {
    do {
        return try divide(x: x, y: y)
    } catch {
        print(error)
        return nil
    }
}()
```
but then we have to use an optional in the case where `divide(x:y:)` does
not return one, and add another check for the optional later.

I think separating guard/else and guard/catch is a great idea because we
wouldn't have the issue of remembering what order else and catch should be
in. We can think about guard as guard/[handler].

In the prior thread and in this thread there has been talk of a version
without the `guard`. This feels more like an extension of the try keyword,
which doesn't sound too bad to me. It handles the case of keeping the result
in the current scope, but it just doesn't group code in an early-exit style.


[swift-evolution] [Proposal] Refine SE-0015 (Tuple Comparison Operators) to include empty case

2017-07-10 Thread Arjun Nayini via swift-evolution
I propose adding support for using comparison operators on the empty tuple.
This is an extension of SE-0015.

This fixes SR-4172, and the use case is described in the comments of the bug.

The PR has been approved and lives here:
https://github.com/apple/swift/pull/8354





[swift-evolution] Better handling of enum cases with associated values

2017-07-10 Thread Sash Zats via swift-evolution
Hi, I wanted to propose better handling of enum cases with associated values.
A (somewhat) detailed proposal is here:
https://github.com/zats/swift-evolution/blob/master/proposals/0181-better-handling-of-enum-cases-with-associated-values.md

I'm not sure I can suggest a good detailed implementation since I'm not too
familiar with Swift internals, but it's something I can research if the
proposal makes sense to the community. Since I haven't communicated through
this mailing list before, I'm not sure of the etiquette; let me know if
something is missing from the proposal.


Re: [swift-evolution] [Proposal] Introduces endianness specific type

2017-07-10 Thread Susan Cheng via swift-evolution
Thanks, but we can implement Codable for BEInteger and LEInteger types.


public struct BEInteger<BitPattern: FixedWidthInteger>: FixedWidthInteger {

    public var bitPattern: BitPattern

    public init(bitPattern: BitPattern)

    public var bigEndian: BEInteger { get }

    public var littleEndian: LEInteger<BitPattern> { get }
}

public struct LEInteger<BitPattern: FixedWidthInteger>: FixedWidthInteger {

    public var bitPattern: BitPattern

    public init(bitPattern: BitPattern)

    public var bigEndian: BEInteger<BitPattern> { get }

    public var littleEndian: LEInteger { get }
}

extension BEInteger: Encodable where BitPattern: Encodable {

    public func encode(to encoder: Encoder) throws {
        try self.bitPattern.encode(to: encoder)
    }
}

extension BEInteger: Decodable where BitPattern: Decodable {

    public init(from decoder: Decoder) throws {
        self.init(bitPattern: try BitPattern(from: decoder))
    }
}

extension LEInteger: Encodable where BitPattern: Encodable {

    public func encode(to encoder: Encoder) throws {
        try self.bitPattern.encode(to: encoder)
    }
}

extension LEInteger: Decodable where BitPattern: Decodable {

    public init(from decoder: Decoder) throws {
        self.init(bitPattern: try BitPattern(from: decoder))
    }
}


2017-07-09 0:27 GMT+08:00 Chris Lattner :

> Hi Susan,
>
> Swift does not currently specify a layout for Swift structs.  You
> shouldn’t be using them for memory mapped i/o or writing to a file, because
> their layout can change.  When ABI stability for fragile structs lands, you
> will be able to count on it, but until then something like this is probably
> a bad idea.
>
> -Chris
>
> On Jul 7, 2017, at 6:16 PM, Susan Cheng via swift-evolution <
> swift-evolution@swift.org> wrote:
>
> Here are two problems being fixed.
>
> First, considering the example:
>
> struct MyRawDataStruct {
>
>   var size: UInt32
>   var signature: UInt32
>   var width: UInt32
>   var height: UInt32
> }
>
> The type UInt32 doesn't tell us the endianness of the value. Also, if we
> read the value, it is byte-swapped when the endianness does not match the
> system's.
>
> This forces us to manually convert the value from/to the correct endianness.
>
> struct MyRawDataStruct {
>
>   var size: BEInteger<UInt32>
>   var signature: BEInteger<UInt32>
>   var width: BEInteger<UInt32>
>   var height: BEInteger<UInt32>
> }
>
> So, my proposal fixes the problem. We can easily get the value.
>
> let header: MyRawDataStruct = data.withUnsafePointer { $0.pointee }
>
> print(header.size)  // print the representing value
>
> Second, the meaning of bigEndian and littleEndian from
> FixedWidthInteger is misleading
>
> if we do this
>
> let a = 1
>
> print(a.bigEndian.bigEndian)
>
> It just swaps the bytes twice, but does not convert the value to big-endian
>
> My proposal solves the problem
>
> let b = a.bigEndian   // BEInteger
>
> b.bigEndian   // remains the big-endian form of a
>
> Max Moiseev wrote on 8 July 2017 at 1:48 AM:
>
> Hi Susan,
>
> Was there any motivation for this proposal that I missed? If not then, can
> you please provide it in a few sentences? Otherwise it’s not clear to me
> what problem it is supposed to fix.
>
> Thanks,
> Max
>
>
> On Jul 6, 2017, at 8:21 PM, Susan Cheng via swift-evolution <
> swift-evolution@swift.org> wrote:
>
> IMO, the representation is unclear when FixedWidthInteger works with an
> endianness-specific type.
>
> so I want to introduce the endianness specific wrapper:
>
> public struct BEInteger<BitPattern: FixedWidthInteger>: FixedWidthInteger {
>
> public var bigEndian: BEInteger { get }
>
> public var littleEndian: LEInteger<BitPattern> { get }
> }
>
> public struct LEInteger<BitPattern: FixedWidthInteger>: FixedWidthInteger {
>
> public var bigEndian: BEInteger<BitPattern> { get }
>
> public var littleEndian: LEInteger { get }
> }
>
> also, we should change the FixedWidthInteger as follow:
>
> public protocol FixedWidthInteger : BinaryInteger {
>
> /// deprecated, we should use value.bigEndian instead
> init(bigEndian value: Self)
>
> /// deprecated, we should use value.littleEndian instead
> init(littleEndian value: Self)
>
> associatedtype EndianRepresentingValue : FixedWidthInteger
>
> var bigEndian: BEInteger<Self> { get }
>
> var littleEndian: LEInteger<Self> { get }
>
> }
>
> =
>
> this is my working alternative implementation:
>
>
> @_versioned
> protocol EndianInteger : FixedWidthInteger {
>
> associatedtype BitPattern : FixedWidthInteger
>
> associatedtype RepresentingValue : FixedWidthInteger
>
> var bitPattern: BitPattern { get }
>
> init(bitPattern: BitPattern)
>
> var representingValue : RepresentingValue { get set }
>
> init(representingValue: RepresentingValue)
> }
>
> extension EndianInteger {
>
> @_transparent
> public init(integerLiteral value: RepresentingValue.IntegerLiteralType)
> {
> self.init(representingValue: RepresentingValue(integerLiteral:
> value))
> }
>
> 

Re: [swift-evolution] [Pitch] Guard/Catch

2017-07-10 Thread David Hart via swift-evolution

> On 10 Jul 2017, at 09:45, Elviro Rocca via swift-evolution 
>  wrote:
> 
> This is not a sugar proposal, in the same way as "guard" is not syntactic 
> sugar, because it requires exiting the scope on the else branch, adding 
> expressive power and safety to the call: also, the sugary part is pretty 
> important because it avoids nested parentheses and very clearly states that 
> if the guard condition is not fulfilled, the execution will not reach the 
> next lines of code. Guard is useful to push the programmer to at least 
> consider an early return instead of branching code paths, to achieve better 
> clarity, readability and lower complexity, and I suspect is one of the best 
> Swift features for many people.
> 
> Also, the case that the proposal aims to cover is not an edge case at all for 
> a lot of people, including me. Rethrowing an error is something that I almost 
> never do, and I consider the "umbrella" do/catch at the top of the call stack 
> an anti-pattern, but I understand that many people like it and I'm not 
> arguing against it. I am arguing in favor of having options and not pushing a 
> particular style onto programmers, and for my (and many people's) style, a 
> guard/catch with forced return is an excellent idea. In fact you seem to 
> agree on the necessity of some kind of forced-returnish catch but your 
> elaborations don't seem (to me) much better than the proposal itself.
> 
> Dave DeLong raised the point of weird behavior in the case of a function like:
> 
> 
> func doSomething() throws -> Result? { … }
> 
> 
> In this case, what would the type of x be?
> 
> 
> guard let x = try doSomething() catch { /// handle and return }

I know we can’t do much about it now, but if optional binding had used the same 
syntax as it does in pattern matching, we wouldn’t be having this discussion:

guard let x = try doSomething() catch {
// handle error
}

guard let x? = doSomething() else {
// handle when nil
}

And mixing both would be a bit cleaner because the ? would make it explicit we 
are doing optional binding:

guard let x? = try doSomething() catch {
// handle error
} else {
// handle when nil
}
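
For comparison, a minimal sketch of what that combined handling requires in today's Swift (doSomething() here is an invented stand-in for any throwing function returning an optional):

```swift
func doSomething() throws -> Int? { return 42 }

func caller() -> Int {
    let maybe: Int?
    do {
        maybe = try doSomething()
    } catch {
        return -1 // handle error
    }
    guard let x = maybe else {
        return 0 // handle when nil
    }
    return x
}
```

The error path and the nil path each need their own construct, which is exactly the nesting the combined syntax would fold into one statement.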

> Simple, it would be Optional<Result>. I don't find this confusing at all, and 
> if the idea that just by seeing "guard let" we should expect a non-Optional 
> is somehow widespread, I think it's better to eradicate it.
> 
> First of all, if I'm returning an optional from a throwing function, it's 
> probably the case that I want the Optional to be there in the returned value: 
> the only reason why I would consider doing that is if the semantics of 
> Optional are pretty meaningful in that case. For example, when parsing a JSON 
> in which I expect a String or null to be at a certain key:
> 
> 
> extension String: Error {}
> 
> func parseString(in dict: [String: Any], at key: String) throws -> String? {
>   guard let x = dict[key] else { throw "No value found at '\(key)' in \(dict)" }
>   if let x = x as? String { return x }
>   if x is NSNull { return nil }
>   throw "Value at '\(key)' in \(dict) is not 'string' or 'null'"
> }
> 
> 
> Thus, if I'm returning an Optional from a throwing function it means that I 
> want to clearly distinguish the two cases, so they shouldn't be collapsed in 
> a single call:
> 
> 
> guard let x = try doSomething() catch { /// handle and return }
> guard let x = x else { /// handle and return }
> 
> 
> Also, if a function returns something like "Int??", a guard-let (or if-let) 
> on the returned value of that function will still bind an "Int?", thus 
> unwrapping only "one level" of optional. If-let and guard-let, as of today, 
> just unwrap a single optional level, and do not guarantee at all that the 
> bound value is not optional.
> 
> To me guard-let (like if-let) is basically sugar for monadic binding for 
> Optionals, with the additional expressivity granted by the forced return. I 
> would love to see the same monadic binding structure applied to throwing 
> functions.
> 
> 
> 
> Elviro
> 
> 
> 
>> On 9 Jul 2017, at 01:16, Christopher Kornher via swift-evolution wrote:
>> 
>> Thanks for your considerate reply. My concern over the proliferation of 
>> “sugar proposals” is a general one. This proposal has more merit and general 
>> utility than many others. I have never used a throwing function in a guard 
>> statement that was not itself in a throwing function, but I can see that it 
>> could possibly be common in some code. Wrapping a guard statement and all 
>> the code that uses variables set in the guard in a do/catch is sub-optimal.
>> 
>>> On Jul 8, 2017, at 4:16 PM, Benjamin Spratling via swift-evolution 
>>> wrote:
>>> 
>>> 
>>> 
>>> I’ve read your email, but haven’t digested it fully.  One thing I agree 
>>> with is that most 

Re: [swift-evolution] [Pitch] Guard/Catch

2017-07-10 Thread Elviro Rocca via swift-evolution
This is not a sugar proposal, in the same way as "guard" is not syntactic 
sugar, because it requires exiting the scope on the else branch, adding 
expressive power and safety to the call: also, the sugary part is pretty 
important because it avoids nested parentheses and very clearly states that if 
the guard condition is not fulfilled, the execution will not reach the next 
lines of code. Guard is useful to push the programmer to at least consider an 
early return instead of branching code paths, to achieve better clarity, 
readability and lower complexity, and I suspect is one of the best Swift 
features for many people.
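
That early-return style can be sketched in a few lines (greet(_:) is an invented example):

```swift
// guard forces the failure path to exit the scope up front,
// leaving the happy path unindented below it.
func greet(_ name: String?) -> String {
    guard let name = name, !name.isEmpty else { return "Hello, stranger" }
    return "Hello, \(name)"
}
```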

Also, the case that the proposal aims to cover is not an edge case at all for a 
lot of people, including me. Rethrowing an error is something that I almost 
never do, and I consider the "umbrella" do/catch at the top of the call stack 
an anti-pattern, but I understand that many people like it and I'm not arguing 
against it. I am arguing in favor of having options and not pushing a 
particular style onto programmers, and for my (and many people's) style, a 
guard/catch with forced return is an excellent idea. In fact you seem to agree 
on the necessity of some kind of forced-returnish catch but your elaborations 
don't seem (to me) much better than the proposal itself.

Dave DeLong raised the point of weird behavior in the case of a function like:


func doSomething() throws -> Result? { … }


In this case, what would the type of x be?


guard let x = try doSomething() catch { /// handle and return }


Simple, it would be Optional<Result>. I don't find this confusing at all, and 
if the idea that just by seeing "guard let" we should expect a non-Optional is 
somehow widespread, I think it's better to eradicate it.

First of all, if I'm returning an optional from a throwing function, it's 
probably the case that I want the Optional to be there in the returned value: 
the only reason why I would consider doing that is if the semantics of Optional 
are pretty meaningful in that case. For example, when parsing a JSON in which I 
expect a String or null to be at a certain key:


extension String: Error {}

func parseString(in dict: [String: Any], at key: String) throws -> String? {
    guard let x = dict[key] else { throw "No value found at '\(key)' in \(dict)" }
    if let x = x as? String { return x }
    if x is NSNull { return nil }
    throw "Value at '\(key)' in \(dict) is not 'string' or 'null'"
}


Thus, if I'm returning an Optional from a throwing function it means that I 
want to clearly distinguish the two cases, so they shouldn't be collapsed in a 
single call:


guard let x = try doSomething() catch { /// handle and return }
guard let x = x else { /// handle and return }


Also, if a function returns something like "Int??", a guard-let (or if-let) on 
the returned value of that function will still bind an "Int?", thus unwrapping 
only "one level" of optional. If-let and guard-let, as of today, just unwrap a 
single optional level, and do not guarantee at all that the bound value is not 
optional.
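
A short sketch of that single-level unwrapping (values invented for illustration):

```swift
// guard-let and if-let peel off exactly one optional level.
let nested: Int?? = .some(nil)
if let x = nested {
    // x has type Int?, not Int: only the outer optional was unwrapped,
    // and here it is still nil.
    print(x as Any) // prints "nil"
}
```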

To me guard-let (like if-let) is basically sugar for monadic binding for 
Optionals, with the additional expressivity granted by the forced return. I 
would love to see the same monadic binding structure applied to throwing 
functions.
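
That reading can be sketched directly against Optional's flatMap (half(_:) is an invented helper):

```swift
// A partial function: defined only for even numbers.
func half(_ n: Int) -> Int? { return n % 2 == 0 ? n / 2 : nil }

// Sugared form: guard-let is the bind; the else branch is the
// short-circuit on nil.
func sugaredHalf(_ a: Int?) -> Int? {
    guard let x = a else { return nil }
    return half(x)
}

// Desugared form: the same computation as a monadic bind.
func desugaredHalf(_ a: Int?) -> Int? {
    return a.flatMap(half)
}
// sugaredHalf(8) == desugaredHalf(8) == .some(4)
// sugaredHalf(3) == desugaredHalf(3) == nil
```

A guard/catch for throwing functions would extend the same short-circuit shape to the error case.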



Elviro



> On 9 Jul 2017, at 01:16, Christopher Kornher via swift-evolution wrote:
> 
> Thanks for your considerate reply. My concern over the proliferation of “sugar 
> proposals” is a general one. This proposal has more merit and general 
> utility than many others. I have never used a throwing function in a guard 
> statement that was not itself in a throwing function, but I can see that it 
> could possibly be common in some code. Wrapping a guard statement and all the 
> code that uses variables set in the guard in a do/catch is sub-optimal.
> 
>> On Jul 8, 2017, at 4:16 PM, Benjamin Spratling via swift-evolution 
>> wrote:
>> 
>> 
>> 
>> I’ve read your email, but haven’t digested it fully.  One thing I agree with 
>> is that most functions which call throwing functions don’t actually use a 
>> do…catch block, but instead are merely marked “throws” and the error is 
>> propagated back through the stack.  Once I seriously started coding 
>> functions with errors, I realized I almost always wanted my errors to reach 
>> my view-controller or my business logic so I could present separate UI if a 
>> real error occurred, and often my error message depended on the details of 
>> the error instance.
>> 
>> 
>> 
>> I disagree with your conclusion on this point.
>> The “guard” syntax is specifically designed to achieve early return (and 
>> placing code associated with early return at the point where it happens) and 
>> cleanly installing the returned value into the surrounding scope.  So 

Re: [swift-evolution] [Pitch] Scoped @available

2017-07-10 Thread rintaro ishizaki via swift-evolution
2017-07-10 14:05 GMT+09:00 Xiaodi Wu:

> This would be very useful, but the spelling needs dramatic improvement.
>
> "Available unavailable" is already challenging to read, but at least it is
> learnable with time. The example where "@available(*, deprecated, access:
> internal) open" means "fileprivate" is entirely unreasonable.
>
>
I agree, but I couldn't come up with a better spelling.

// "deprecated" if the access is from "outside" of "fileprivate" scope.
@available(*, deprecated, outside: fileprivate) open

Hmm..

Any suggestions will be greatly appreciated!



> On Sun, Jul 9, 2017 at 22:40 rintaro ishizaki via swift-evolution <
> swift-evolution@swift.org> wrote:
>
>> Hi evolution community,
>>
>> I would like to propose "Scoped @available" attribute.
>>
>> What I want to achieve is to declare something that is publicly
>> unavailable, but still usable from narrower scope. A concrete example is
>> IndexableBase in the standard library:
>> https://github.com/apple/swift/blob/master/stdlib/public/core/Collection.swift#L18-L20
>> Workaround for this problem in stdlib is to use typealias to underscored
>> declaration. However, we can't use this technique in our module because
>> underscored declarations are still visible and usable from outside.
>>
>> As a solution, I propose to add "access" parameter to @available
>> attribute, which limits the effect of @available attribute to the
>> specified value or outer.
>>
>> ---
>> // Module: Library
>>
>> /// This protocol is available for internal, but deprecated for public.
>> @available(*, deprecated, access: public, message: "it will be removed in future")
>> public protocol OldProtocol {
>> /* ... */
>> }
>>
>> public struct Foo: OldProtocol { // No diagnostics
>> }
>>
>> ---
>> // Module: main
>>
>> import Library
>>
>> public struct Bar: OldProtocol {
>>   // warning: 'OldProtocol' is deprecated: it will be removed in future
>> }
>>
>> ---
>>
>> I think this is useful when you want to stop exposing declarations, but
>> want to keep using them internally.
>>
>> More examples:
>>
>> // is `open`, going to be `fileprivate`
>> @available(*, deprecated, access: internal)
>> open class Foo {}
>>
>> // was `internal`, now `private`
>> @available(*, unavailable, access: fileprivate)
>> var value: Int
>>
>> // No effect (invisible from public anyway): emit a warning
>> @available(*, unavailable, access: public)
>> internal struct Foo {}
>>
>> What do you think?
>> ___
>> swift-evolution mailing list
>> swift-evolution@swift.org
>> https://lists.swift.org/mailman/listinfo/swift-evolution
>>
>