Re: [swift-evolution] Enums and Source Compatibility

2017-09-09 Thread Chris Lattner via swift-evolution

> On Sep 6, 2017, at 10:37 PM, Rod Brown  wrote:
> 
>> We’ve talked about enums many times across years, and it seems like the 
>> appropriate model follows the generally understood resilience model.  
>> Specifically, there should be three different kinds of enums, and the kind 
>> should affect users outside their module in different ways:
>> 
>> 1. private/fileprivate/internal enum: cases can be added freely.  All 
>> clients are in the same module, so the enum is implicitly fragile, and all 
>> switches within the current module may therefore be exhaustive.
>> 
>> 2. public enum (i.e., one that isn’t marked fragile): cases may be added 
>> freely.  Within the module that defines the enum, switches may be 
>> exhaustive.  However, because the enum is public and non-fragile, clients 
>> outside the current module must be prepared for the enum to add additional 
>> cases in future revisions of the API, and therefore they cannot exhaustively 
>> match the cases of the enum.
>> 
>> 3. fragile public enum: cases may not be added, because that would break the 
>> fragility guarantee.  As such, clients within or outside of the current 
>> module may exhaustively match against the enum.
>> 
>> 
>> This approach gives a very natural user model: app developers don’t have to 
>> care about enum resilience until they mark an enum as public, and even then 
>> they only have to care about it when/if they mark an enum as fragile.  This 
>> also builds on the notion of fragility - something we need for other nominal 
>> types like structs and classes - so it doesn’t introduce new language 
>> complexity.  Also such an approach is entirely source compatible with Swift 
>> 3/4, which require defaults (this isn’t an accident, it follows from the 
>> anticipated design).
>> 
>> This approach doesn’t address the problem of what to do with C though, 
>> because C doesn’t have a reasonable notion of “extensible” vs 
>> “nonextensible” enum.  As such, we definitely do need an attribute (or 
>> something) to add to Clang.  I think that your proposal for defaulting to 
>> “extensible” and using __attribute__((enum_extensibility(closed))) to override 
>> this is perfectly sensible.
>> 
>> -Chris
>> 
> 
> Hi Chris,
> 
> I think I agree with you in general, with 1 exception:
> 
> I think the wording “fragile”, while technically correct, implies the exact 
> opposite of the promise contract, namely that it will not change between 
> releases of your framework. Perhaps a term like “concrete” would be more 
> appropriate? It would be fragile in that it is a fragile interface, but it 
> would be concrete as a promise to external dependencies. If you are 
> exhaustively enumerating, you’re basing it on the notion that it won’t 
> change, not that it’s "easy to break” (which fragile as a word would seem to 
> imply).

Hi Rod,

Just to clarify, I wasn’t intending to make a syntax proposal here.  I was 
talking about the semantic model that we should provide.  The bikeshed should 
be painted in a color that aligns best with the rest of the resilience model.

-Chris

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Proposal] Random Unification

2017-09-09 Thread Chris Lattner via swift-evolution

> On Sep 8, 2017, at 9:52 AM, Alejandro Alonso via swift-evolution 
>  wrote:
> 
> Hello swift evolution, I would like to propose a unified approach to 
> `random()` in Swift. I have a simple implementation here 
> https://gist.github.com/Azoy/5d294148c8b97d20b96ee64f434bb4f5 
> This 
> implementation is a simple wrapper over existing random functions so existing 
> code bases will not be affected. Also, this approach introduces a new random 
> feature for Linux users that gives them access to upper bounds, as well as a 
> lower bound for both Glibc and Darwin users. This change would be implemented 
> within Foundation.
> 
> I believe this simple change could have a very positive impact on new 
> developers learning Swift and experienced developers being able to write 
> single random declarations.
> 
> I’d like to hear about your ideas on this proposal, or any implementation 
> changes if need be.

My 2c:

- I’d love to see some random number initializers get added as initializers on 
the numeric types.  This is a long standing hole and it would be hugely 
valuable to fill it.
- I’d love to see several of the most common random kinds supported, and I 
agree it would be nice (but not required IMO) for the default to be 
cryptographically secure.
- We should avoid the temptation to nuke this mosquito with a heavy handed 
solution designed to solve all of the world’s problems: For example, the C++ 
random number stuff is crazily over-general.  The stdlib should aim to solve 
(e.g.) the top 3 most common cases, and let a more specialized external library 
solve the fully general problem (e.g. seed management, every distribution 
imaginable, etc).

In terms of approach, I’d suggest looking at other libraries that are 
conceptually similar, e.g. the “simple random data” APIs for numpy:
https://docs.scipy.org/doc/numpy-1.13.0/reference/routines.random.html 


Things like “return a random number from 0 to 1.0, 0 to N, 0 to INTMAX, sample 
from a normal/gaussian distribution”, and maybe one more should be enough.
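
For illustration, a hedged sketch of what a couple of those might look like 
(Darwin-only, using arc4random as a stand-in generator; illustrative, not a 
proposed API):

import Darwin

extension Double {
    /// A uniformly distributed value in [0, 1). Illustrative only.
    static func randomUnit() -> Double {
        return Double(arc4random()) / (Double(UInt32.max) + 1)
    }
}

extension Int {
    /// A uniformly distributed value in 0..<upperBound (upperBound must fit in UInt32 here).
    static func random(upTo upperBound: Int) -> Int {
        precondition(upperBound > 0 && upperBound <= Int(UInt32.max))
        return Int(arc4random_uniform(UInt32(upperBound)))
    }
}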

-Chris


___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] pure functions

2017-09-09 Thread David Sweeris via swift-evolution

> On Sep 9, 2017, at 10:48 AM, Dave Abrahams via swift-evolution 
>  wrote:
> 
> on Wed Aug 23 2017, Joe Groff  wrote: 
 On Aug 18, 2017, at 12:10 PM, Chris Lattner via swift-evolution wrote: 
 
 Splitting this out from the concurrency thread: 
 
> On Aug 18, 2017, at 6:12 AM, Matthew Johnson  
> wrote: 
>> On Aug 17, 2017, at 11:53 PM, Chris Lattner  wrote: 
>>  
>> In the manifesto you talk about restrictions on passing functions across 
>> an actor message.  You didn’t discuss pure functions, presumably because 
>> Swift doesn’t have them yet. I imagine that if (hopefully when) Swift 
>> has compiler support for verifying pure functions these would also be 
>> safe to pass across an actor message.  Is that correct? 
> Correct.  The proposal is specifically/intentionally designed to be light 
> on type system additions, but there are many that could make it better in 
> various ways.  The logic for this approach is that I expect *a lot* of 
> people will be writing mostly straight-forward concurrent code, and that 
> goal is harmed by presenting significant type system hurdles for them to 
> jump over, because that implies a higher learning curve.   This is why 
> the proposal doesn’t focus on a provably memory safe system: If someone 
> slaps “ValueSemantical” on a type that doesn’t obey, they will break the 
> invariants of the system.  There are lots of ways to solve that problem 
> (e.g. the capabilities system in Pony) but it introduces a steep learning 
> curve.   I haven’t thought a lot about practically getting pure functions 
> into Swift, because it wasn’t clear what problems it would solve (which 
> couldn’t be solved another way).  You’re right though that this could be 
> an interesting motivator. 
 I can provide a concrete example of why this is definitely and important 
 motivator. My current project uses pure functions, value semantics and 
 declarative effects at the application level and moves as much of the 
 imperative code as possible (including effect handling) into library level 
 code. This is working out really well and I plan to continue with this 
 approach.  The library level code needs the ability to schedule user code 
 in the appropriate context.  There will likely be some declarative ability 
 for application level code to influence the context, priority, etc, but it 
 is the library that will be moving the functions to the final context.  
 They are obviously not closure literals from the perspective of the 
 library.   Pure functions are obviously important to the semantics of this 
 approach.  We can get by without compiler verification, using 
 documentation just as we do for protocol requirements that can't be 
 verified.  That said, it would be pretty disappointing to have to avoid 
 using actors in the implementation simply because we can't move pure 
 functions from one actor to another as necessary.   To be clear, I am 
 talking in the context of "the fullness of time".  It would be perfectly 
 acceptable to ship actors before pure functions. That said, I do think 
 it's crucial that we eventually have the ability to verify pure functions 
 and move them around at will. 
>>> Right.  Pure functions are also nice when you care about thread safety, and 
>>> there is a lot of work on this.  C has __attribute__((const)) and ((pure)) 
>>> for example, c++ has constexpr, and many research languages have built full 
>>> blown effects systems.   My principle concern is that things like this 
>>> quickly become infectious: LOTS of things are pure functions, and requiring 
>>> them all to be marked as such becomes a lot of boilerplate and conceptual 
>>> overhead.  This is happening in the C++ community with constexpr for 
>>> example. The secondary concern is that you need to build out the model 
>>> enough that you don’t prevent abstractions.  A pure function should be able 
>>> to create an instance of a struct, mutate it (i.e. calling non-pure 
>>> functions) etc.  This requires a non-trivial design, and as the design 
>>> complexity creeps, you run the risk of it getting out of control. 
>> Now that inout parameters are guaranteed exclusive, a mutating method on a 
>> struct or a function that takes inout parameters is isomorphic to one that 
>> consumes the initial value as a pure argument and returns the modified value 
>> back. This provides a value-semantics-friendly notion of purity, where a 
>> function can still be considered pure if the only thing it mutates is its 
>> unescaped local state and its inout parameters and it doesn't read or write 
>> any shared mutable state such as mutable globals, instance properties, or 
>> escaped variables. 

Re: [swift-evolution] Enums and Source Compatibility

2017-09-09 Thread Rod Brown via swift-evolution
Jordan,

Do you have any other thoughts about the ongoing discussion here, especially 
regarding Chris’ comments? As you’re the one pushing this forward, I’d really 
like to know what your thoughts are regarding this?

- Rod
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-09 Thread Wallacy via swift-evolution
This is the only part of the proposal with which I can't concur!

`async` at the call site solves this nicely! And Pierre also showed how commonly
people are doing it wrong; they will get it wrong with Futures too.

func doit() async {
    let dataResource  = async loadWebResource("dataprofile.txt")
    let imageResource = async loadWebResource("imagedata.dat")
    let imageTmp      = await decodeImage(dataResource, imageResource)
    self.imageResult  = await dewarpAndCleanupImage(imageTmp)
}

Anyway, we have time to think about it.


Em sáb, 9 de set de 2017 às 20:30, David Hart via swift-evolution <
swift-evolution@swift.org> escreveu:

> On 10 Sep 2017, at 00:40, Kenny Leung via swift-evolution <
> swift-evolution@swift.org> wrote:
>
> Then isn’t the example functionally equivalent to:
>
> func doit() {
>     DispatchQueue.global().async {
>         let dataResource  = loadWebResource("dataprofile.txt")
>         let imageResource = loadWebResource("imagedata.dat")
>         let imageTmp      = decodeImage(dataResource, imageResource)
>         let imageResult   = dewarpAndCleanupImage(imageTmp)
>         DispatchQueue.main.async {
>             self.imageResult = imageResult
>         }
>     }
> }
>
> if all of the API were synchronous? Why wouldn’t we just exhort people to
> write synchronous API code and continue using libdispatch? What am I
> missing?
>
>
> There are probably very good optimisations for going asynchronous, but I’m
> not the right person for that part of the answer.
>
> But I can give another answer: once we have an async/await pattern, we can
> build Futures/Promises on top of them and then we can await on multiple
> asynchronous calls in parallel. But it won’t be a feature of async/await in
> itself:
>
> func doit() async {
>     let dataResource  = Future({ loadWebResource("dataprofile.txt") })
>     let imageResource = Future({ loadWebResource("imagedata.dat") })
>     let imageTmp      = await decodeImage(dataResource.get, imageResource.get)
>     self.imageResult  = await dewarpAndCleanupImage(imageTmp)
> }
>
> -Kenny
>
>
> On Sep 8, 2017, at 2:33 PM, David Hart  wrote:
>
>
> On 8 Sep 2017, at 20:34, Kenny Leung via swift-evolution <
> swift-evolution@swift.org> wrote:
>
> Hi All.
>
> A point of clarification in this example:
>
> func loadWebResource(_ path: String) async -> Resource
> func decodeImage(_ r1: Resource, _ r2: Resource) async -> Image
> func dewarpAndCleanupImage(_ i : Image) async -> Image
> func processImageData1() async -> Image {
> let dataResource  = await loadWebResource("dataprofile.txt")
> let imageResource = await loadWebResource("imagedata.dat")
> let imageTmp  = await decodeImage(dataResource, imageResource)
> let imageResult   = await dewarpAndCleanupImage(imageTmp)
> return imageResult
> }
>
>
> Do these:
>
> await loadWebResource("dataprofile.txt")
>
> await loadWebResource("imagedata.dat")
>
>
> happen in parallel?
>
>
> They don’t happen in parallel.
>
> If so, how can I make the second one wait on the first one? If not, how
> can I make them go in parallel?
>
> Thanks!
>
> -Kenny
>
> ___
> swift-evolution mailing list
> swift-evolution@swift.org
> https://lists.swift.org/mailman/listinfo/swift-evolution
>
>
>
> ___
> swift-evolution mailing list
> swift-evolution@swift.org
> https://lists.swift.org/mailman/listinfo/swift-evolution
>
> ___
> swift-evolution mailing list
> swift-evolution@swift.org
> https://lists.swift.org/mailman/listinfo/swift-evolution
>
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-09 Thread David Hart via swift-evolution

> On 10 Sep 2017, at 00:40, Kenny Leung via swift-evolution 
>  wrote:
> 
> Then isn’t the example functionally equivalent to:
> 
> func doit() {
>     DispatchQueue.global().async {
>         let dataResource  = loadWebResource("dataprofile.txt")
>         let imageResource = loadWebResource("imagedata.dat")
>         let imageTmp      = decodeImage(dataResource, imageResource)
>         let imageResult   = dewarpAndCleanupImage(imageTmp)
>         DispatchQueue.main.async {
>             self.imageResult = imageResult
>         }
>     }
> }
> 
> if all of the API were synchronous? Why wouldn’t we just exhort people to 
> write synchronous API code and continue using libdispatch? What am I missing?

There are probably very good optimisations for going asynchronous, but I’m not 
the right person for that part of the answer.

But I can give another answer: once we have an async/await pattern, we can 
build Futures/Promises on top of them and then we can await on multiple 
asynchronous calls in parallel. But it won’t be a feature of async/await in 
itself:

func doit() async {
    let dataResource  = Future({ loadWebResource("dataprofile.txt") })
    let imageResource = Future({ loadWebResource("imagedata.dat") })
    let imageTmp      = await decodeImage(dataResource.get, imageResource.get)
    self.imageResult  = await dewarpAndCleanupImage(imageTmp)
}

> -Kenny
> 
> 
>> On Sep 8, 2017, at 2:33 PM, David Hart > > wrote:
>> 
>> 
>>> On 8 Sep 2017, at 20:34, Kenny Leung via swift-evolution 
>>> > wrote:
>>> 
>>> Hi All.
>>> 
>>> A point of clarification in this example:
>>> 
>>> func loadWebResource(_ path: String) async -> Resource
>>> func decodeImage(_ r1: Resource, _ r2: Resource) async -> Image
>>> func dewarpAndCleanupImage(_ i : Image) async -> Image
>>> 
>>> func processImageData1() async -> Image {
>>> let dataResource  = await loadWebResource("dataprofile.txt")
>>> let imageResource = await loadWebResource("imagedata.dat")
>>> let imageTmp  = await decodeImage(dataResource, imageResource)
>>> let imageResult   = await dewarpAndCleanupImage(imageTmp)
>>> return imageResult
>>> }
>>> 
>>> Do these:
>>> 
>>> await loadWebResource("dataprofile.txt")
>>> await loadWebResource("imagedata.dat")
>>> 
>>> happen in parallel?
>> 
>> They don’t happen in parallel.
>> 
>>> If so, how can I make the second one wait on the first one? If not, how can 
>>> I make them go in parallel?
>>> 
>>> Thanks!
>>> 
>>> -Kenny
>>> 
>>> ___
>>> swift-evolution mailing list
>>> swift-evolution@swift.org 
>>> https://lists.swift.org/mailman/listinfo/swift-evolution 
>>> 
>> 
> 
> ___
> swift-evolution mailing list
> swift-evolution@swift.org
> https://lists.swift.org/mailman/listinfo/swift-evolution

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-09 Thread Kenny Leung via swift-evolution
Then isn’t the example functionally equivalent to:

func doit() {
    DispatchQueue.global().async {
        let dataResource  = loadWebResource("dataprofile.txt")
        let imageResource = loadWebResource("imagedata.dat")
        let imageTmp      = decodeImage(dataResource, imageResource)
        let imageResult   = dewarpAndCleanupImage(imageTmp)
        DispatchQueue.main.async {
            self.imageResult = imageResult
        }
    }
}

if all of the API were synchronous? Why wouldn’t we just exhort people to write 
synchronous API code and continue using libdispatch? What am I missing?

-Kenny


> On Sep 8, 2017, at 2:33 PM, David Hart  wrote:
> 
> 
>> On 8 Sep 2017, at 20:34, Kenny Leung via swift-evolution 
>> > wrote:
>> 
>> Hi All.
>> 
>> A point of clarification in this example:
>> 
>> func loadWebResource(_ path: String) async -> Resource
>> func decodeImage(_ r1: Resource, _ r2: Resource) async -> Image
>> func dewarpAndCleanupImage(_ i : Image) async -> Image
>> 
>> func processImageData1() async -> Image {
>> let dataResource  = await loadWebResource("dataprofile.txt")
>> let imageResource = await loadWebResource("imagedata.dat")
>> let imageTmp  = await decodeImage(dataResource, imageResource)
>> let imageResult   = await dewarpAndCleanupImage(imageTmp)
>> return imageResult
>> }
>> 
>> Do these:
>> 
>> await loadWebResource("dataprofile.txt")
>> await loadWebResource("imagedata.dat")
>> 
>> happen in parallel?
> 
> They don’t happen in parallel.
> 
>> If so, how can I make the second one wait on the first one? If not, how can 
>> I make them go in parallel?
>> 
>> Thanks!
>> 
>> -Kenny
>> 
>> ___
>> swift-evolution mailing list
>> swift-evolution@swift.org 
>> https://lists.swift.org/mailman/listinfo/swift-evolution
> 

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [swift-evolution-announce] [Review] SE-0184: Unsafe[Mutable][Raw][Buffer]Pointer: add missing methods, adjust existing labels for clarity, and remove deallocation size

2017-09-09 Thread Andrew Trick via swift-evolution

> On Sep 9, 2017, at 8:37 AM, Andrew Trick  wrote:
> 
> 
>> On Sep 9, 2017, at 3:15 AM, Jean-Daniel  wrote:
>> 
>> 
>>> Le 8 sept. 2017 à 03:03, Andrew Trick via swift-evolution 
>>>  a écrit :
>>> 
>>> 
 On Sep 7, 2017, at 5:37 PM, Joe Groff  wrote:
> 
> The important thing is that the UnsafeBufferPointer API is clearly 
> documented. We do not want users to think it’s ok to deallocate a smaller 
> buffer than they allocated.
> 
> Unfortunately, there’s actually no way to assert this in the runtime 
> because malloc_size could be larger than the allocated capacity. 
> Incorrect code could happen to work and we can live with that.
 
 Would it be sufficient to assert that malloc_good_size(passedCapacity) == 
 malloc_size(base) ? It wouldn't be perfect but could still catch a lot of 
 misuses.
>>> 
>>> That theory does hold up for a million random values, but I don’t know if 
>>> we can rely on malloc_size never being larger than roundUp(sz, 16). Greg?
>> 
>> You can’t. This may be true while the alloc size is less than a page, but a 
>> quick test shows that:
>> 
>> malloc_size(malloc(4097)) = 4608
> 
> Thanks, I was being a bit silly...
> We also have malloc_good_size(4097) = 4608.
> 
> What I was getting at is, can malloc_good_size be “dumb” for any legal 
> implementation of malloc zones?
> 
> Or can we assert malloc_good_size(x) == malloc_size(malloc(x))?
> 
> -Andy

Answer:
- this assumption is obviously dependent on a particular implementation of libc.
- users implement their malloc zone however they like (although Swift doesn't 
strictly *need* to be compatible with them).
- even current implementations of libc have various operating modes that could 
violate the assertion, you would need to guard the check
  with a runtime condition.
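
For illustration, a quick Darwin-only sketch of the assumption under discussion 
(illustrative only; per the points above it only holds for particular zone 
implementations and modes):

import Darwin

for requested in [16, 24, 4097, 1 << 20] {
    guard let p = malloc(requested) else { continue }
    defer { free(p) }
    // On the default zone, e.g. malloc_size(malloc(4097)) == 4608 == malloc_good_size(4097),
    // but other zones or debug modes need not preserve this.
    print("\(requested): malloc_size = \(malloc_size(p)), malloc_good_size = \(malloc_good_size(requested))")
}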

-Andy
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Proposal] Random Unification

2017-09-09 Thread Brent Royal-Gordon via swift-evolution
> On Sep 9, 2017, at 12:03 PM, Taylor Swift via swift-evolution 
>  wrote:
> 
> I would argue that anyone doing cryptography probably already knows how 
> important RNG selection is and can be expected to look for a specialized 
> cryptographically secure RNG. I doubt they would just use the default RNG 
> without first checking the documentation.

https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=Random

Software engineers are *so* bad at this.

-- 
Brent Royal-Gordon
Architechies

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Proposal] Explicit Synthetic Behaviour

2017-09-09 Thread Gwendal Roué via swift-evolution
All right, I'll be more positive: our science, IT, is a *constructive* science, 
by *essence*. If there is a problem, there must be a way to show it.

If you can't, then there is no problem.

Gwendal

> Le 9 sept. 2017 à 15:26, Gwendal Roué  a écrit :
> 
> Hello Haravikk,
> 
> I'm worried that you are failing to demonstrate a real problem. May I suggest a 
> change in your strategy?
> 
> Sometimes, sample code greatly helps turning subtle ideas into blatant 
> evidence. After all, subtleties are all about corner cases, and corner cases 
> are the blind spots of imagination. What about giving that little something 
> that would help your readers grasp your arguments?
> 
> I don't quite know what example you will provide, but I could suggest the 
> exhibition of a practical problem with Equatable synthesis. We'll know better 
> if the problem can arise in the Standard lib, in third-party libraries, at 
> application level, or at several scales at the same time. It would also be 
> nice to see your solution to the problem, that is to say an alternative that 
> still provides code synthesis for developers that want to opt in the feature, 
> but avoids the caveat of the initial example. I hope this would greatly help 
> the discussion move forward.
> 
> Last general comment about the topic: if Haravikk is right, and that code 
> synthesis should indeed be explicit, then that wouldn't be such a shame.
> 
> My two cents,
> Gwendal Roué
> 
> 
>> Le 9 sept. 2017 à 13:41, Haravikk via swift-evolution 
>> > a écrit :
>> 
>>> 
>>> On 9 Sep 2017, at 09:33, Xiaodi Wu >> > wrote:
>>> 
>>> 
>>> On Sat, Sep 9, 2017 at 02:47 Haravikk via swift-evolution 
>>> > wrote:
>>> 
 On 9 Sep 2017, at 02:02, Xiaodi Wu > wrote:
 
 On Fri, Sep 8, 2017 at 4:00 PM, Itai Ferber via swift-evolution 
 > wrote:
 
 
> On Sep 8, 2017, at 12:46 AM, Haravikk via swift-evolution 
> > wrote:
> 
> 
>> On 7 Sep 2017, at 22:02, Itai Ferber > > wrote:
>> 
>> protocol Fooable : Equatable { // Equatable is just a simple example
>> var myFoo: Int { get }
>> }
>> 
>> extension Fooable {
>> static func ==(_ lhs: Self, _ rhs: Self) -> Bool {
>> return lhs.myFoo == rhs.myFoo
>> }
>> }
>> 
>> struct X : Fooable {
>> let myFoo: Int
>> let myName: String
>> // Whoops, forgot to give an implementation of ==
>> }
>> 
>> print(X(myFoo: 42, myName: "Alice") == X(myFoo: 42, myName: "Bob")) // 
>> true
>> This property is necessary, but not sufficient to provide a correct 
>> implementation. A default implementation might be able to assume 
>> something about the types that it defines, but it does not necessarily 
>> know enough.
> 
> Sorry but that's a bit of a contrived example; in this case the protocol 
> should not implement the equality operator if more information may be 
> required to define equality. It should only be implemented if the 
> protocol is absolutely clear that .myFoo is the only part of a Fooable 
> that can or should be compared as equatable, e.g- if a Fooable is a 
> database record and .myFoo is a primary key, the data could differ but it 
> would still be a reference to the same record.
> 
> To be clear, I'm not arguing that someone can't create a regular default 
> implementation that also makes flawed assumptions, but that 
> synthesised/reflective implementations by their very nature have to, as 
> they cannot under every circumstance guarantee correctness when using 
> parts of a concrete type that they know nothing about.
 
 You can’t argue this both ways:
 If you’re arguing this on principle, that in order for synthesized 
 implementations to be correct, they must be able to — under every 
 circumstance — guarantee correctness, then you have to apply the same 
 reasoning to default protocol implementations. Given a default protocol 
 implementation, it is possible to come up with a (no matter how contrived) 
 case where the default implementation is wrong. Since you’re arguing this 
 on principle, you cannot reject contrived examples.
 If you are arguing this in practice, then you’re going to have to back up 
 your argument with evidence that synthesized examples are more often wrong 
 than default implementations. You can’t declare that synthesized 
 implementations are by nature incorrect but allow default implementations 
 to slide because in 

Re: [swift-evolution] [Proposal] Random Unification

2017-09-09 Thread Jean-Daniel via swift-evolution

> Le 9 sept. 2017 à 21:03, Taylor Swift via swift-evolution 
>  a écrit :
> 
> 
> 
> On Fri, Sep 8, 2017 at 8:07 PM, Xiaodi Wu via swift-evolution 
> > wrote:
> On Fri, Sep 8, 2017 at 7:50 PM, Stephen Canon  > wrote:
>> On Sep 8, 2017, at 8:09 PM, Xiaodi Wu via swift-evolution 
>> > wrote:
>> 
>> This topic has been broached on Swift Evolution previously. It's interesting 
>> to me that Steve Canon is so certain that CSPRNGs are the way to go. I 
>> wasn't aware that hardware CSPRNGs have come such a long way and are so 
>> ubiquitous as to be feasible as a basis for Swift random numbers. If so, 
>> great.
>> 
>> Otherwise, if there is any way that a software, non-cryptographically secure 
>> PRNG is going to outperform a CSPRNG, then I think it's worthwhile to have a 
>> (carefully documented) choice between the two. I would imagine that for many 
>> uses, such as an animation in which you need a plausible source of noise to 
>> render a flame, whether that is cryptographically secure or not is 
>> absolutely irrelevant but performance may be key.
> 
> Let me be precise: it is absolutely possible to outperform CSPRNGs. They have 
> simply become fast enough that the performance gap doesn’t matter for most 
> uses (let’s say amortized ten cycles per byte or less—whatever you are going 
> to do with the random bitstream will be much more expensive than getting the 
> bits was).
> 
> That said, yes, there should definitely be other options. It should be 
> possible for users to get reproducible results from a stdlib random interface 
> run-to-run, and also across platforms. That alone requires that at least one 
> other option for a generator be present. There may also be a place for a very 
> high-throughput generator like xorshiro128+.
> 
> All I’m really saying is that the *default* generator should be an 
> os-provided unseeded CSPRNG, and we should be very careful about documenting 
> any generator options.
> 
> 
> Agree on all points. Much like Swift's strings are Unicode-correct instead of 
> the fastest possible way of slicing and dicing sequences of ASCII characters, 
> Swift's default PRNG should be cryptographically secure.
> 
> 
> 
> ___
> swift-evolution mailing list
> swift-evolution@swift.org 
> https://lists.swift.org/mailman/listinfo/swift-evolution 
> 
> 
> I would argue that anyone doing cryptography probably already knows how 
> important RNG selection is

If that were the case, why are there so many security issues due to a poor choice 
of random source?

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Proposal] Random Unification

2017-09-09 Thread Jonathan Hull via swift-evolution
Here is my wishlist:

• A protocol which allows random instances of any type which conforms (they 
would init from some number of random bytes passed to them… or better yet, they 
would be passed a source which can give them however many bytes of randomness 
they need)
• A choice between fast and secure randomness (and possibly other 
implementations)
• The optional ability to seed so that randomness can be reproduced when needed

It would also be nice to be able to constrain the created instances in some 
way.  For scalars like Float and Int, this would mean being able to define a 
range which the random value falls in.  For dimensional constructs (e.g. 
colors, points, sizes) you want to be able to constrain each axis individually. 
 This is actually the hardest part to get right, especially if you are trying 
to keep things secure.  Even for graphics applications where security isn’t the 
issue, biases in the randomness added by naive approaches like using % can be 
apparent to the eye.
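
For illustration, a minimal sketch of that modulo bias and the usual 
rejection-sampling fix, assuming only some nextUInt32 source of uniform 32-bit 
words (illustrative only):

func unbiasedRandom(below n: UInt32, using nextUInt32: () -> UInt32) -> UInt32 {
    precondition(n > 0)
    // 2^32 mod n, computed without needing a 64-bit type.
    let r = (UInt32.max % n + 1) % n
    // Accept only values in [0, 2^32 - r), an exact multiple of n, so x % n is uniform.
    let maxAccepted = UInt32.max - r
    var x = nextUInt32()
    while x > maxAccepted { x = nextUInt32() }   // reject the biased tail
    return x % n
}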

I think what I would like to see is a protocol defining a source of randomness, 
which does that hard part for you (written in mail):

protocol RandomnessSource {
    init(seed: UInt64)
    func randomBytes(count: Int) -> [UInt64]

    // Default imp of the below provided from the above.
    // A nil range means don’t constrain; convenience overloads like randomInt() default it to nil.
    func randomInt(in: ClosedRange<Int>?) -> Int
    func randomUInt(in: ClosedRange<UInt>?) -> UInt
    func randomDouble(in: ClosedRange<Double>?) -> Double
    func randomBool() -> Bool
}

…and then a protocol where a type can use that source to create random 
instances of themselves:

protocol Randomizable {
    static func random(_ source: RandomnessSource) -> Self
    static var fastRandom: Self { get }    // Uses default “fast” source (default imp provided)
    static var secureRandom: Self { get }  // Uses default “secure” source (default imp provided)
}

protocol RangeRandomizable: Randomizable {
    static func random(_ source: RandomnessSource, in: ClosedRange<Self>?) -> Self
    static func fastRandom(in: ClosedRange<Self>?) -> Self    // Uses default “fast” source (default imp provided)
    static func secureRandom(in: ClosedRange<Self>?) -> Self  // Uses default “secure” source (default imp provided)

    // Also provides a default imp for random(_ source: RandomnessSource)
    // as { random(source, in: nil) }, etc...
}

Then things like points/sizes can use scalar types as building blocks:

extension CGFloat: RangeRandomizable {
    static func random(_ source: RandomnessSource, in range: ClosedRange<CGFloat>?) -> CGFloat {
        let doubleRange = // Convert range to a ClosedRange<Double>
        return CGFloat(source.randomDouble(in: doubleRange))
    }
}

extension CGSize: Randomizable {
    static func random(_ source: RandomnessSource) -> CGSize {
        return CGSize(width: CGFloat.random(source), height: CGFloat.random(source))
    }

    // Can add convenience functions to bound width/height to a range/constant
}
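
For illustration, a caller might use these something like the following 
(SystemRandomSource is a hypothetical conforming type, not defined here):

let source = SystemRandomSource(seed: 42)            // hypothetical source; seeded, so reproducible
let roll   = source.randomInt(in: 1...6)             // constrained scalar
let offset = CGFloat.random(source, in: -2.0...2.0)  // range-constrained via RangeRandomizable
let size   = CGSize.random(source)                   // composed dimensional value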


What I do behind the scenes in my own protocol is take a dictionary of named 
constraints (instead of a range) because I need a common interface for all 
dimensions:

public enum RandomSourceConstraint<T> {
    case none
    case constant(T)
    case min(T)
    case max(T)
    case range(T, T)
    case custom((RandomSourceValue) -> T)

    // More code here...
}

Thanks,
Jon


> On Sep 8, 2017, at 3:40 PM, Jonathan Hull via swift-evolution 
>  wrote:
> 
> Here is some Swift 3 code that allows simple repeatable random 
> sequences by conforming to a protocol:
> https://gist.github.com/jonhull/3655672529f8cf5b2eb248583d2cafb9 
> 
>  
> I now use a slightly more complicated version of this which allows more 
> complex types (like colors) to be added and constrained in more interesting 
> ways than just from…to (e.g. Colors with a fixed lightness/brightness, but a 
> range in hue). The base is pretty much the same.
> 
> I have found that user-facing code often needs the idea of 
> repeatable/re-creatable randomness. I originally created it to create a 
> sketchy version of lines that had random offsets added to points along the 
> line. If the randomness wasn’t reproducible, then the sketchiness of the line 
> would shift around randomly every time there was a change.
> 
> The code I have provided above is not useful for any sort of 
> cryptography/security though.  There are different reasons for randomness. 
> Maybe that is something we should consider?
> 
> Thanks,
> Jon
> 
>> On Sep 8, 2017, at 9:52 AM, Alejandro Alonso via swift-evolution 
>> > wrote:
>> 
>> Hello swift evolution, I would like to propose a unified approach to 
>> `random()` in Swift. I have a simple implementation here 
>> https://gist.github.com/Azoy/5d294148c8b97d20b96ee64f434bb4f5 
>> 

Re: [swift-evolution] [Pitch] Synthesized static enum property to iterate over cases

2017-09-09 Thread Tony Allevato via swift-evolution
On Fri, Sep 8, 2017 at 5:14 PM Xiaodi Wu  wrote:

> On Fri, Sep 8, 2017 at 4:08 PM, Matthew Johnson via swift-evolution <
> swift-evolution@swift.org> wrote:
>
>>
>> On Sep 8, 2017, at 12:05 PM, Tony Allevato 
>> wrote:
>>
>>
>>
>> On Fri, Sep 8, 2017 at 9:44 AM Matthew Johnson 
>> wrote:
>>
>>> On Sep 8, 2017, at 11:32 AM, Tony Allevato 
>>> wrote:
>>>
>>>
>>>
>>> On Fri, Sep 8, 2017 at 8:35 AM Matthew Johnson 
>>> wrote:
>>>
 On Sep 8, 2017, at 9:53 AM, Tony Allevato via swift-evolution <
 swift-evolution@swift.org> wrote:

 Thanks for bringing this up, Logan! It's something I've been thinking
 about a lot lately after a conversation with some colleagues outside of
 this community. Some of my thoughts:

 AFAIK, there are two major use cases here: (1) you need the whole
 collection of cases, like in your example, and (2) you just need the number
 of cases. The latter seems to occur somewhat commonly when people want to
 use an enum to define the sections of, say, a UITableView. They just return
 the count from numberOfSections(in:) and then switch over the cases in
 their cell-providing methods.

 Because of #2, it would be nice to avoid instantiating the collection
 eagerly. (Also because of examples like Jonathan's, where the enum is
 large.) If all the user is ever really doing is iterating over them,
 there's no need to keep the entire collection in memory. This leads us to
 look at Sequence; we could use something like AnySequence to keep the
 current case as our state and a transition function to advance to the next
 one. If a user needs to instantiate the full array from that sequence they
 can do so, but they have to do it explicitly.

 The catch is that Sequence only provides `underestimatedCount`, rather
 than `count`. Calling the former would be an awkward API (why is it
 underestimated? we know how many cases there are). I suppose we could
 create a concrete wrapper for Sequence (PrecountedSequence?) that provides
 a `count` property to make that cleaner, and then have
 `underestimatedCount` return the same thing if users passed this thing into
 a generic operation constrained over Sequence. (The standard library
 already has support wrappers like EnumeratedSequence, so maybe this is
 appropriate.)

 Another question that would need to be answered is, how should the
 cases be ordered? Declaration order seems obvious and straightforward, but
 if you have a raw-value enum (say, integers), you could have the
 declaration order and the numeric order differ. Maybe that's not a problem.
 Tying the iteration order to declaration order also means that the behavior
 of a program could change simply by reördering the cases. Maybe that's not
 a big problem either, but it's something to call out.

 If I were designing this, I'd start with the following approach. First,
 add a new protocol to the standard library:

 ```
 public protocol ValueEnumerable {
   associatedtype AllValuesSequence: Sequence where
 AllValuesSequence.Iterator.Element == Self

   static var allValues: AllValuesSequence { get }
 }
 ```

 Then, for enums that declare conformance to that protocol, synthesize
 the body of `allValues` to return an appropriate sequence. If we imagine a
 model like AnySequence, then the "state" can be the current case, and the
 transition function can be a switch/case that returns it and advances to
 the next one (finally returning nil).

 There's an opportunity for optimization that may or may not be worth
 it: if the enum is RawRepresentable with RawValue == Int, AND all the raw
 values are in a contiguous range, AND declaration order is numerical order
 (assuming we kept that constraint), then the synthesized state machine can
 just be a simple integer incrementation and call to `init?(rawValue:)`.
 When all the cases have been generated, that will return nil on its own.
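
 For concreteness, a hand-written sketch of that switch-based state machine
 might look like this (the enum and exact shapes are illustrative only):

 ```
 enum Direction: ValueEnumerable {
   case north, south, east, west

   static var allValues: AnySequence<Direction> {
     return AnySequence { () -> AnyIterator<Direction> in
       var current: Direction? = .north
       return AnyIterator {
         // Return the current case, then advance to the next one.
         defer {
           switch current {
           case .north?: current = .south
           case .south?: current = .east
           case .east?:  current = .west
           case .west?, nil: current = nil
           }
         }
         return current
       }
     }
   }
 }
 ```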

 So that covers enums without associated values. What about those with
 associated values? I would argue that the "number of cases" isn't something
 that's very useful here—if we consider that enum cases are really factory
 functions for concrete values of the type, then we shouldn't think about
 "what are all the cases of this enum" but "what are all the values of this
 type". (For enums without associated values, those are synonymous.)

 An enum with associated values can potentially have an infinite number
 of values. Here's one:

 ```
 indirect enum BinaryTree {
   case subtree(left: BinaryTree, right: BinaryTree)
   case leaf
   case empty
 }
 ```

 Even without introducing an Element type 

Re: [swift-evolution] [Proposal] Random Unification

2017-09-09 Thread Taylor Swift via swift-evolution
On Fri, Sep 8, 2017 at 8:07 PM, Xiaodi Wu via swift-evolution <
swift-evolution@swift.org> wrote:

> On Fri, Sep 8, 2017 at 7:50 PM, Stephen Canon  wrote:
>
>> On Sep 8, 2017, at 8:09 PM, Xiaodi Wu via swift-evolution <
>> swift-evolution@swift.org> wrote:
>>
>>
>> This topic has been broached on Swift Evolution previously. It's
>> interesting to me that Steve Canon is so certain that CSPRNGs are the way
>> to go. I wasn't aware that hardware CSPRNGs have come such a long way and
>> are so ubiquitous as to be feasible as a basis for Swift random numbers. If
>> so, great.
>>
>> Otherwise, if there is any way that a software, non-cryptographically
>> secure PRNG is going to outperform a CSPRNG, then I think it's worthwhile
>> to have a (carefully documented) choice between the two. I would imagine
>> that for many uses, such as an animation in which you need a plausible
>> source of noise to render a flame, whether that is cryptographically secure
>> or not is absolutely irrelevant but performance may be key.
>>
>>
>> Let me be precise: it is absolutely possible to outperform CSPRNGs. They
>> have simply become fast enough that the performance gap doesn’t matter for
>> most uses (let’s say amortized ten cycles per byte or less—whatever you are
>> going to do with the random bitstream will be much more expensive than
>> getting the bits was).
>>
>> That said, yes, there should definitely be other options. It should be
>> possible for users to get reproducible results from a stdlib random
>> interface run-to-run, and also across platforms. That alone requires that
>> at least one other option for a generator be present. There may also be a
>> place for a very high-throughput generator like xorshiro128+.
>>
>> All I’m really saying is that the *default* generator should be an
>> os-provided unseeded CSPRNG, and we should be very careful about
>> documenting any generator options.
>>
>
>
> Agree on all points. Much like Swift's strings are Unicode-correct instead
> of the fastest possible way of slicing and dicing sequences of ASCII
> characters, Swift's default PRNG should be cryptographically secure.
>
>
>
> ___
> swift-evolution mailing list
> swift-evolution@swift.org
> https://lists.swift.org/mailman/listinfo/swift-evolution
>
I would argue that anyone doing cryptography probably already knows how
important RNG selection is and can be expected to look for a specialized
cryptographically secure RNG. I doubt they would just use the default RNG
without first checking the documentation.
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Pitch] Synthesized static enum property to iterate over cases

2017-09-09 Thread Tony Allevato via swift-evolution
On Sat, Sep 9, 2017 at 11:36 AM Matthew Johnson via swift-evolution <
swift-evolution@swift.org> wrote:

>
>
> Sent from my iPad
>
> On Sep 9, 2017, at 11:42 AM, gs.  wrote:
>
> How does fragility play into this? Does this only work for fragile
> (closed) and internal/private/fileprivate enums?
>
>
> That's a great question.  I think it would have to have that limitation.
> Using Jordan's terminology, by definition a nonexhaustive cannot provide a
> complete list of all values.
>

This one is tougher for me to make a call. I definitely see the point of
view that says that if a nonexhaustive enum doesn't provide a complete
list, then it would make sense to not synthesize it. On the other hand,
some nonexhaustive enums may still benefit from that. For example, I've
been tinkering with wrapping the ICU APIs in Swift, and I have an enum for
Unicode code blocks.
That would be a good candidate for a nonexhaustive enum because the spec is
always growing, but it would still be very useful to have the compiler
synthesize the collection and count for me (for example, to display in a
table), especially since it is large.



>
>
> TJ
>
>
> On Sep 9, 2017, at 15:23, Matthew Johnson via swift-evolution <
> swift-evolution@swift.org> wrote:
>
>
>
> Sent from my iPad
>
> On Sep 9, 2017, at 7:33 AM, Brent Royal-Gordon 
> wrote:
>
> On Sep 8, 2017, at 5:14 PM, Xiaodi Wu via swift-evolution <
> swift-evolution@swift.org> wrote:
>
> Here, people just want an array of all cases. Give them an array of all
> cases. When it's not possible (i.e., in the case of cases with associated
> values), don't do it.
>
>
> I agree it should be Int-indexed; that seems to be what people want from
> this.
>
> I seem to recall that there is information about the available enum cases
> in the module metadata. If so, and if we're willing to lock that in as part
> of the ABI design, I think we should write—or at least allow for—a custom
> Int-indexed collection, because this may allow us to recurse into
> associated value types. If we aren't going to have suitable metadata,
> though, I agree we should just use an Array. There are pathological cases
> where instantiating a large Array might be burdensome, but sometimes you
> just have to ignore the pathological cases.
>
> (The "infinite recursion" problem with associated values is actually
> relatively easy to solve, by the way: Don't allow, or at least don't
> generate, `ValuesEnumerable` conformance on enums with `indirect` cases.)
>
>
> This is the direction I think makes the most sense in terms of how we
> should approach synthesis.  The open question in my mind is what the exact
> requirement of the protocol should be.  Should it exactly match what we
> synthesize (`[Self]` or an associated `Collection where Iterator.Element ==
> Self, Index == Int`) or whether the protocol should have a more relaxed
> requirement of `Sequence where Iterator.Element == Self` like Tony proposed.
>
>
> --
> Brent Royal-Gordon
> Architechies
>
> ___
> swift-evolution mailing list
> swift-evolution@swift.org
> https://lists.swift.org/mailman/listinfo/swift-evolution
>
> ___
> swift-evolution mailing list
> swift-evolution@swift.org
> https://lists.swift.org/mailman/listinfo/swift-evolution
>
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Pitch] Synthesized static enum property to iterate over cases

2017-09-09 Thread Christopher Kornher via swift-evolution

> On Sep 9, 2017, at 12:36 PM, Matthew Johnson via swift-evolution 
>  wrote:
> 
> 
> 
> Sent from my iPad
> 
> On Sep 9, 2017, at 11:42 AM, gs.  > wrote:
> 
>> How does fragility play into this? Does this only work for fragile (closed) 
>> and internal/private/fileprivate enums?
> 
> That's a great question.  I think it would have to have that limitation.  
> Using Jordan's terminology, by definition a nonexhaustive cannot provide a 
> complete list of all values.

The runtime “knows” (or could be made to know) all the cases at any given 
moment in time (ignoring runtime-loaded modules, should they ever be 
supported). This is actually a strong argument for the creation of this 
feature. It would be impossible for such a list to be maintained manually. 
Making the list available somehow at compile time would almost guarantee a 
source-breaking/ABI-breaking change in the future.  This raises a question: 
would modules want anything other than the complete list of cases at runtime? 
For example, the module containing the root enum may have a use for the cases 
just defined within that module. I propose that the feature be defined to 
include all cases at runtime and that discussions of partial lists of cases be 
deferred until a use is found for them.

> 
>> 
>> TJ 
>> 
>> On Sep 9, 2017, at 15:23, Matthew Johnson via swift-evolution 
>> > wrote:
>> 
>>> 
>>> 
>>> Sent from my iPad
>>> 
>>> On Sep 9, 2017, at 7:33 AM, Brent Royal-Gordon >> > wrote:
>>> 
> On Sep 8, 2017, at 5:14 PM, Xiaodi Wu via swift-evolution 
> > wrote:
> 
> Here, people just want an array of all cases. Give them an array of all 
> cases. When it's not possible (i.e., in the case of cases with associated 
> values), don't do it.
 
 
 I agree it should be Int-indexed; that seems to be what people want from 
 this.
 
 I seem to recall that there is information about the available enum cases 
 in the module metadata. If so, and if we're willing to lock that in as 
 part of the ABI design, I think we should write—or at least allow for—a 
 custom Int-indexed collection, because this may allow us to recurse into 
 associated value types. If we aren't going to have suitable metadata, 
 though, I agree we should just use an Array. There are pathological cases 
 where instantiating a large Array might be burdensome, but sometimes you 
 just have to ignore the pathological cases.
 
 (The "infinite recursion" problem with associated values is actually 
 relatively easy to solve, by the way: Don't allow, or at least don't 
 generate, `ValuesEnumerable` conformance on enums with `indirect` cases.)
>>> 
>>> This is the direction I think makes the most sense in terms of how we 
>>> should approach synthesis.  The open question in my mind is what the exact 
>>> requirement of the protocol should be.  Should it exactly match what we 
>>> synthesize (`[Self]` or an associated `Collection where Iterator.Element == 
>>> Self, Index == Int`) or whether the protocol should have a more relaxed 
>>> requirement of `Sequence where Iterator.Element == Self` like Tony proposed.
>>> 
 
 -- 
 Brent Royal-Gordon
 Architechies
 
>>> ___
>>> swift-evolution mailing list
>>> swift-evolution@swift.org 
>>> https://lists.swift.org/mailman/listinfo/swift-evolution 
>>> 
> ___
> swift-evolution mailing list
> swift-evolution@swift.org
> https://lists.swift.org/mailman/listinfo/swift-evolution

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Pitch] Synthesized static enum property to iterate over cases

2017-09-09 Thread Matthew Johnson via swift-evolution


Sent from my iPad

> On Sep 9, 2017, at 11:42 AM, gs.  wrote:
> 
> How does fragility play into this? Does this only work for fragile (closed) 
> and internal/private/fileprivate enums?

That's a great question.  I think it would have to have that limitation.  Using 
Jordan's terminology, by definition a nonexhaustive cannot provide a complete 
list of all values.

> 
> TJ 
> 
>> On Sep 9, 2017, at 15:23, Matthew Johnson via swift-evolution 
>>  wrote:
>> 
>> 
>> 
>> Sent from my iPad
>> 
>>> On Sep 9, 2017, at 7:33 AM, Brent Royal-Gordon  
>>> wrote:
>>> 
 On Sep 8, 2017, at 5:14 PM, Xiaodi Wu via swift-evolution 
  wrote:
 
 Here, people just want an array of all cases. Give them an array of all 
 cases. When it's not possible (i.e., in the case of cases with associated 
 values), don't do it.
>>> 
>>> 
>>> I agree it should be Int-indexed; that seems to be what people want from 
>>> this.
>>> 
>>> I seem to recall that there is information about the available enum cases 
>>> in the module metadata. If so, and if we're willing to lock that in as part 
>>> of the ABI design, I think we should write—or at least allow for—a custom 
>>> Int-indexed collection, because this may allow us to recurse into 
>>> associated value types. If we aren't going to have suitable metadata, 
>>> though, I agree we should just use an Array. There are pathological cases 
>>> where instantiating a large Array might be burdensome, but sometimes you 
>>> just have to ignore the pathological cases.
>>> 
>>> (The "infinite recursion" problem with associated values is actually 
>>> relatively easy to solve, by the way: Don't allow, or at least don't 
>>> generate, `ValuesEnumerable` conformance on enums with `indirect` cases.)
>> 
>> This is the direction I think makes the most sense in terms of how we should 
>> approach synthesis.  The open question in my mind is what the exact 
>> requirement of the protocol should be.  Should it exactly match what we 
>> synthesize (`[Self]` or an associated `Collection where Iterator.Element == 
>> Self, Index == Int`) or whether the protocol should have a more relaxed 
>> requirement of `Sequence where Iterator.Element == Self` like Tony proposed.
>> 
>>> 
>>> -- 
>>> Brent Royal-Gordon
>>> Architechies
>>> 
>> ___
>> swift-evolution mailing list
>> swift-evolution@swift.org
>> https://lists.swift.org/mailman/listinfo/swift-evolution
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] pure functions

2017-09-09 Thread Dave Abrahams via swift-evolution


on Wed Aug 23 2017, Joe Groff wrote: 

On Aug 18, 2017, at 12:10 PM, Chris Lattner via swift-evolution wrote: 

Splitting this out from the concurrency thread: 



On Aug 18, 2017, at 6:12 AM, Matthew Johnson 
 wrote: 
On Aug 17, 2017, at 11:53 PM, Chris Lattner 
 wrote:  
In the manifesto you talk about restrictions on passing 
functions across an actor message.  You didn’t discuss pure 
functions, presumably because Swift doesn’t have them yet. 
I imagine that if (hopefully when) Swift has compiler 
support for verifying pure functions these would also be 
safe to pass across an actor message.  Is that correct? 
 Correct.  The proposal is specifically/intentionally 
designed to be light on type system additions, but there are 
many that could make it better in various ways.  The logic 
for this approach is that I expect *a lot* of people will be 
writing mostly straight-forward concurrent code, and that 
goal is harmed by presenting significant type system hurdles 
for them to jump over, because that implies a higher learning 
curve.   This is why the proposal doesn’t focus on a provably 
memory safe system: If someone slaps “ValueSemantical” on a 
type that doesn’t obey, they will break the invariants of the 
system.  There are lots of ways to solve that problem 
(e.g. the capabilities system in Pony) but it introduces a 
steep learning curve.   I haven’t thought a lot about 
practically getting pure functions into Swift, because it 
wasn’t clear what problems it would solve (which couldn’t be 
solved another way).  You’re right though that this could be 
an interesting motivator. 
 I can provide a concrete example of why this is definitely 
and important motivator. My current project uses pure 
functions, value semantics and declarative effects at the 
application level and moves as much of the imperative code as 
possible (including effect handling) into library level code. 
This is working out really well and I plan to continue with 
this approach.  The library level code needs the ability to 
schedule user code in the appropriate context.  There will 
likely be some declarative ability for application level code 
to influence the context, priority, etc, but it is the library 
that will be moving the functions to the final context.  They 
are obviously not closure literals from the perspective of the 
library.   Pure functions are obviously important to the 
semantics of this approach.  We can get by without compiler 
verification, using documentation just as we do for protocol 
requirements that can't be verified.  That said, it would be 
pretty disappointing to have to avoid using actors in the 
implementation simply because we can't move pure functions 
from one actor to another as necessary.   To be clear, I am 
talking in the context of "the fullness of time".  It would be 
perfectly acceptable to ship actors before pure functions. 
That said, I do think it's crucial that we eventually have the 
ability to verify pure functions and move them around at will. 
 Right.  Pure functions are also nice when you care about 
thread safety, and there is a lot of work on this.  C has 
__attribute__((const)) and ((pure)) for example, c++ has 
constexpr, and many research languages have built full blown 
effects systems.   My principle concern is that things like 
this quickly become infectious: LOTS of things are pure 
functions, and requiring them all to be marked as such becomes 
a lot of boilerplate and conceptual overhead.  This is 
happening in the C++ community with constexpr for example. 
The secondary concern is that you need to build out the model 
enough that you don’t prevent abstractions.  A pure function 
should be able to create an instance of a struct, mutate it 
(i.e. calling non-pure functions) etc.  This requires a 
non-trivial design, and as the design complexity creeps, you 
run the risk of it getting out of control. 


Now that inout parameters are guaranteed exclusive, a mutating 
method on a struct or a function that takes inout parameters is 
isomorphic to one that consumes the initial value as a pure 
argument and returns the modified value back. This provides a 
value-semantics-friendly notion of purity, where a function can 
still be considered pure if the only thing it mutates is its 
unescaped local state and its inout parameters and it doesn't 
read or write any shared mutable state such as mutable globals, 
instance properties, or escaped variables. That gives you the 
ability to declare local variables and composably apply "pure" 
mutating operations to them inside a pure function. 

We've already brought Swift somewhat into the effects-system 
design space with "throws" (and "async", if it gets taken as 
we've currently proposed it), and we already have some 
abstraction debt to pay off with "throws"; if we wanted to, we 
could conceivably fold "impure" into that 

Re: [swift-evolution] [Proposal] Explicit Synthetic Behaviour

2017-09-09 Thread Xiaodi Wu via swift-evolution
On Sat, Sep 9, 2017 at 06:41 Haravikk via swift-evolution <
swift-evolution@swift.org> wrote:

> On 9 Sep 2017, at 09:33, Xiaodi Wu  wrote:
>
>
> On Sat, Sep 9, 2017 at 02:47 Haravikk via swift-evolution <
> swift-evolution@swift.org> wrote:
>
>>
>> On 9 Sep 2017, at 02:02, Xiaodi Wu  wrote:
>>
>> On Fri, Sep 8, 2017 at 4:00 PM, Itai Ferber via swift-evolution <
>> swift-evolution@swift.org> wrote:
>>
>>>
>>>
>>> On Sep 8, 2017, at 12:46 AM, Haravikk via swift-evolution <
>>> swift-evolution@swift.org> wrote:
>>>
>>>
>>> On 7 Sep 2017, at 22:02, Itai Ferber  wrote:
>>>
>>> protocol Fooable : Equatable { // Equatable is just a simple example
>>>     var myFoo: Int { get }
>>> }
>>>
>>> extension Fooable {
>>>     static func ==(_ lhs: Self, _ rhs: Self) -> Bool {
>>>         return lhs.myFoo == rhs.myFoo
>>>     }
>>> }
>>>
>>> struct X : Fooable {
>>>     let myFoo: Int
>>>     let myName: String
>>>     // Whoops, forgot to give an implementation of ==
>>> }
>>>
>>> print(X(myFoo: 42, myName: "Alice") == X(myFoo: 42, myName: "Bob")) // true
>>>
>>> This property is *necessary*, but not *sufficient* to provide a correct
>>> implementation. A default implementation might be able to *assume* something
>>> about the types that it defines, but it does not necessarily know enough.
>>>
>>>
>>> Sorry but that's a bit of a contrived example; in this case the protocol
>>> should *not* implement the equality operator if more information may be
>>> required to define equality. It should only be implemented if the protocol
>>> is absolutely clear that .myFoo is the only part of a Fooable that can or
>>> should be compared as equatable, e.g- if a Fooable is a database record and
>>> .myFoo is a primary key, the data could differ but it would still be a
>>> reference to the same record.
>>>
>>> To be clear, I'm not arguing that someone can't create a regular default
>>> implementation that also makes flawed assumptions, but that
>>> synthesised/reflective implementations *by their very nature have to*,
>>> as they cannot under every circumstance guarantee correctness when using
>>> parts of a concrete type that they know nothing about.
>>>
>>> You can’t argue this both ways:
>>>
>>>- If you’re arguing this on principle, that in order for synthesized
>>>implementations to be correct, they *must* be able to — *under every
>>>circumstance* — guarantee correctness, then you have to apply the
>>>same reasoning to default protocol implementations. Given a default
>>>protocol implementation, it is possible to come up with a (no matter how
>>>contrived) case where the default implementation is wrong. Since you’re
>>>arguing this *on principle*, you cannot reject contrived examples.
>>>- If you are arguing this *in practice*, then you’re going to have
>>>to back up your argument with evidence that synthesized examples are more
>>>often wrong than default implementations. You can’t declare that
>>>synthesized implementations are *by nature* incorrect but allow
>>>default implementations to slide because *in practice*, many
>>>implementations are allowable. There’s a reason why synthesis passed code
>>>review and was accepted: in the majority of cases, synthesis was deemed 
>>> to
>>>be beneficial, and would provide correct behavior. If you are willing to
>>>say that yes, sometimes default implementations are wrong but overall
>>>they’re correct, you’re going to have to provide hard evidence to back up
>>>the opposite case for synthesized implementations. You stated in a 
>>> previous
>>>email that "A synthesised/reflective implementation however may
>>>return a result that is simply incorrect, because it is based on
>>>assumptions made by the protocol developer, with no input from the
>>>developer of the concrete type. In this case the developer must override 
>>> it
>>>in to provide *correct* behaviour." — if you can back this up with
>>>evidence (say, taking a survey of a large number of model types and see 
>>> if
>>>in the majority of cases synthesized implementation would be incorrect) 
>>> to
>>>provide a compelling argument, then this is something that we should in
>>>that case reconsider.
>>>
>>>
>> Well put, and I agree with this position 100%. However, to play devil's
>> advocate here, let me summarize what I think Haravikk is saying:
>>
>> I think the "synthesized" part of this is a red herring, if I understand
>> Haravikk's argument correctly. Instead, it is this:
>>
>> (1) In principle, it is possible to have a default implementation for a
>> protocol requirement that produces the correct result--though not
>> necessarily in the most performant way--for all possible conforming types,
>> where by conforming we mean that the type respects both the syntactic
>> requirements (enforced by the compiler) and the semantic requirements
>> (which may not necessarily be enforceable by 

Re: [swift-evolution] [Pitch] Improve `init(repeating:count)`

2017-09-09 Thread Dave Abrahams via swift-evolution


on Fri Aug 18 2017, Erica Sadun wrote:

On Aug 17, 2017, at 9:29 PM, Taylor Swift wrote:

On Thu, Aug 17, 2017 at 9:06 PM, Erica Sadun via swift-evolution wrote:

On Aug 17, 2017, at 6:56 PM, Xiaodi Wu wrote:

On Thu, Aug 17, 2017 at 7:51 PM, Erica Sadun wrote:

What people are doing is taking a real set of values (1, 2, 3, 4, 5, for
example), then discarding them via `_ in`, which is different from `Void -> T`
or `f(x) = 0 * x`. The domain could just as easily be (Foo(), "b", ,
UIColor.red, { x: Int in x^x }). There are too many semantic shifts away from
"I would like to collect the execution of this closure n times" for it to sit
comfortably.

What arguments might help to alleviate this discomfort? Clearly, functions
exist that can map this delightfully heterogeneous domain to some sort of range
that the user wants. Would you feel better if we wrote instead the following?

```
repeatElement((), count: 5).map { UIView() }
```

My favorite solution is the array initializer. Something along the lines of
`Array(count n: Int, generator: () -> T)`. I'm not sure it _quite_ reaches
standard library but I think it is a solid way to say "produce a collection
with a generator run n times". It's a common task. I was asking around about
this, and found that a lot of us who work with both macOS and iOS and want to
stress test interfaces do this very often. Other use cases include "give me n
random numbers", "give me n records from this database", etc. along similar
lines. The difference between this and the current `Array(repeating:count:)`
initializer is switching the arguments and using a trailing closure (or an
autoclosure) rather than a set value. That API was designed without the
possibility that you might want to repeat a generator, so there's a bit of
linguistic turbulence.

-- E

To me at least, this is a very i-Centric complaint, since I can barely remember
the last time I needed something like this for anything that didn’t involve
UIKit. What you’re asking for is API sugar for generating reference types with
less typing.


No, that's what the original thread poster wanted.

I want to avoid breaking math.


I know it's late to chime in here, but IMO (mutable) reference 
types and the exposure of === “break math,” and I think that's the 
real effect you're seeing here.
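
For concreteness, here is a rough sketch of the generator-taking initializer discussed above. The `Array(count:generator:)` spelling follows Erica's suggestion and is not settled API; this is only meant to show the shape of the idea:

```
extension Array {
    /// Sketch: run `generator` `count` times, in order, and collect the results.
    init(count: Int, generator: () -> Element) {
        var result: [Element] = []
        result.reserveCapacity(count)
        for _ in 0..<count {
            result.append(generator())
        }
        self = result
    }
}

// e.g. five fresh values from a generator run five times:
var n = 0
let oneThroughFive = Array(count: 5) { () -> Int in
    n += 1
    return n
}
// oneThroughFive == [1, 2, 3, 4, 5]; with a reference type, each element
// would likewise be a distinct instance (e.g. Array(count: 5) { UIView() }).
```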


--
-Dave

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] Pitch: Improved Swift pointers

2017-09-09 Thread Dave Abrahams via swift-evolution


on Wed Aug 09 2017, Xiaodi Wu wrote:

On Wed, Aug 9, 2017 at 8:22 PM, Brent Royal-Gordon via 
swift-evolution < 
swift-evolution@swift.org> wrote: 

On Jul 19, 2017, at 11:21 AM, Taylor Swift via swift-evolution 
< swift-evolution@swift.org> 
wrote: 

What about `value:`? 

`ptr.initialize(value: value)` `ptr.initialize(value: value, 
count: 13)` `ptr.initialize(as: UInt16.self, at: 0, value: 
value, count: 13)` 
 
Doesn't read as a sentence. Consider how "initialize to 3" 
sounds different from "initialize value 3". 

Personally, I'd go with: 

ptr.initialize(to: value) ptr.initialize(to: value, 
repeatCount: 3) 

(Or just `repeat`/`repeating` if you don't feel like you need 
the word "count" to disambiguate.) 



Per Swift API naming guidelines, initializers don't have to read 
as sentences IIRC, and I'd be inclined to grant a function named 
`initialize(_:)` the same courtesy. 


I know it's late to chime in here, but: that's not an initializer.

--
-Dave

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Proposal] Explicit Synthetic Behaviour

2017-09-09 Thread Xiaodi Wu via swift-evolution
On Sat, Sep 9, 2017 at 07:51 Brent Royal-Gordon 
wrote:

> On Sep 8, 2017, at 6:03 PM, Xiaodi Wu via swift-evolution <
> swift-evolution@swift.org> wrote:
>
> For any open protocol (i.e., a protocol for which the universe of possible
> conforming types cannot be enumerated a priori by the protocol designer)
> worthy of being a protocol by the Swift standard ("what useful thing can
> you do with such a protocol that you could not without?"), any sufficiently
> interesting requirement (i.e., one for which user ergonomics would
> measurably benefit from a default implementation) either cannot have a
> universally guaranteed correct implementation or has an implementation
> which is also going to be the most performant one (which can therefore be a
> non-overridable protocol extension method rather than an overridable
> protocol requirement with a default implementation).
>
>
> Counter-example: `index(of:)`, or rather, the underscored requirement
> underlying `index(of:)`. The "loop over all indices and return the first
> whose element matches" default implementation is universally guaranteed to
> be correct, but a collection like `Set` or `SortedArray` can provide an
> implementation which is more performant than the default.
>

Don't get me started on Swift's handling of equality and arrays with NaN.
_customIndexOfEquatable, if I'm not mistaken, is a part of that whole
tangle of performance optimizations which gleefully refuse to acknowledge
Equatable's semantic peephole for some values of a type being unordered
with respect to everything else. In a world where this trade-off between
performance and correctness had not been taken, I don't imagine that it
would be possible to make the protocol extension method 'index(of:)' any
more performant than 'index(where: { $0 == $1 })'.
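
For anyone following along, a small illustration of the peephole in question, using plain Swift 4 collection APIs and IEEE-754 `==` for `Double`:

```
let values: [Double] = [1.0, .nan, 3.0]

// index(of:) goes through ==, and NaN compares unequal to everything,
// including itself, so the NaN element can never be found this way.
let byEquality = values.index(of: .nan)              // nil

// A predicate-based search can acknowledge the "unordered" values directly.
let byPredicate = values.index(where: { $0.isNaN })  // Optional(1)
```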
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [swift-evolution-announce] [Review] SE-0184: Unsafe[Mutable][Raw][Buffer]Pointer: add missing methods, adjust existing labels for clarity, and remove deallocation size

2017-09-09 Thread Andrew Trick via swift-evolution

> On Sep 9, 2017, at 3:15 AM, Jean-Daniel  wrote:
> 
> 
>> On 8 Sep 2017, at 03:03, Andrew Trick via swift-evolution wrote:
>> 
>> 
>>> On Sep 7, 2017, at 5:37 PM, Joe Groff  wrote:
 
 The important thing is that the UnsafeBufferPointer API is clearly 
 documented. We do not want users to think it’s ok to deallocate a smaller 
 buffer than they allocated.
 
 Unfortunately, there’s actually no way to assert this in the runtime 
 because malloc_size could be larger than the allocated capacity. Incorrect 
 code could happen to work and we can live with that.
>>> 
>>> Would it be sufficient to assert that malloc_good_size(passedCapacity) == 
>>> malloc_size(base) ? It wouldn't be perfect but could still catch a lot of 
>>> misuses.
>> 
>> That theory does hold up for a million random values, but I don’t know if we 
>> can rely on malloc_size never being larger than roundUp(sz, 16). Greg?
> 
> You can’t. This may be true while the alloc size is less than a page, but a quick test shows that:
> 
> malloc_size(malloc(4097)) = 4608

Thanks, I was being a bit silly...
We also have malloc_good_size(4097) = 4608.

What I was getting at is, can malloc_good_size be “dumb” for any legal 
implementation of malloc zones?

Or can we assert malloc_good_size(x) == malloc_size(malloc(x))?
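
For reference, a quick empirical probe of that assertion, along the lines of the million-random-values test mentioned above. It assumes `malloc`, `malloc_size`, and `malloc_good_size` are all visible from Swift via Darwin, and it only exercises one platform's allocator, so it proves nothing about what a legal malloc zone is allowed to do:

```
import Darwin

// Probe malloc_good_size(x) == malloc_size(malloc(x)) across a range of sizes.
// Any mismatch would mean the proposed assert could fire on correct code.
for size in 1...100_000 {
    guard let p = malloc(size) else { continue }
    defer { free(p) }
    if malloc_good_size(size) != malloc_size(p) {
        print("mismatch at \(size): good=\(malloc_good_size(size)), actual=\(malloc_size(p))")
    }
}
```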

-Andy

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Proposal] Explicit Synthetic Behaviour

2017-09-09 Thread Gwendal Roué via swift-evolution
Hello Haravikk,

I'm worried that you may be failing to demonstrate a real problem to prevent. May I suggest a change 
in your strategy?

Sometimes, sample code greatly helps turn subtle ideas into blatant 
evidence. After all, subtleties are all about corner cases, and corner cases 
are the blind spots of imagination. What about giving that little something 
that would help your readers grasp your arguments?

I don't quite know what example you will provide, but I would suggest 
exhibiting a practical problem with Equatable synthesis. We'll then know better 
whether the problem can arise in the standard library, in third-party libraries, 
at the application level, or at several of these scales at the same time. It would 
also be nice to see your solution to the problem, that is to say an alternative 
that still provides code synthesis for developers who want to opt in to the 
feature, but avoids the pitfall of the initial example. I hope this would greatly 
help the discussion move forward.

Last general comment about the topic: if Haravikk is right, and code 
synthesis should indeed be explicit, then that wouldn't be such a shame.

My two cents,
Gwendal Roué


> On 9 Sep 2017, at 13:41, Haravikk via swift-evolution wrote:
> 
>> 
>> On 9 Sep 2017, at 09:33, Xiaodi Wu > > wrote:
>> 
>> 
>> On Sat, Sep 9, 2017 at 02:47 Haravikk via swift-evolution 
>> > wrote:
>> 
>>> On 9 Sep 2017, at 02:02, Xiaodi Wu >> > wrote:
>>> 
>>> On Fri, Sep 8, 2017 at 4:00 PM, Itai Ferber via swift-evolution 
>>> > wrote:
>>> 
>>> 
 On Sep 8, 2017, at 12:46 AM, Haravikk via swift-evolution 
 > wrote:
 
 
> On 7 Sep 2017, at 22:02, Itai Ferber  > wrote:
> 
> protocol Fooable : Equatable { // Equatable is just a simple example
> var myFoo: Int { get }
> }
> 
> extension Fooable {
> static func ==(_ lhs: Self, _ rhs: Self) -> Bool {
> return lhs.myFoo == rhs.myFoo
> }
> }
> 
> struct X : Fooable {
> let myFoo: Int
> let myName: String
> // Whoops, forgot to give an implementation of ==
> }
> 
> print(X(myFoo: 42, myName: "Alice") == X(myFoo: 42, myName: "Bob")) // 
> true
> This property is necessary, but not sufficient to provide a correct 
> implementation. A default implementation might be able to assume 
> something about the types that it defines, but it does not necessarily 
> know enough.
 
 Sorry but that's a bit of a contrived example; in this case the protocol 
 should not implement the equality operator if more information may be 
 required to define equality. It should only be implemented if the protocol 
 is absolutely clear that .myFoo is the only part of a Fooable that can or 
 should be compared as equatable, e.g- if a Fooable is a database record 
 and .myFoo is a primary key, the data could differ but it would still be a 
 reference to the same record.
 
 To be clear, I'm not arguing that someone can't create a regular default 
 implementation that also makes flawed assumptions, but that 
 synthesised/reflective implementations by their very nature have to, as 
 they cannot under every circumstance guarantee correctness when using 
 parts of a concrete type that they know nothing about.
>>> 
>>> You can’t argue this both ways:
>>> If you’re arguing this on principle, that in order for synthesized 
>>> implementations to be correct, they must be able to — under every 
>>> circumstance — guarantee correctness, then you have to apply the same 
>>> reasoning to default protocol implementations. Given a default protocol 
>>> implementation, it is possible to come up with a (no matter how contrived) 
>>> case where the default implementation is wrong. Since you’re arguing this 
>>> on principle, you cannot reject contrived examples.
>>> If you are arguing this in practice, then you’re going to have to back up 
>>> your argument with evidence that synthesized examples are more often wrong 
>>> than default implementations. You can’t declare that synthesized 
>>> implementations are by nature incorrect but allow default implementations 
>>> to slide because in practice, many implementations are allowable. There’s a 
>>> reason why synthesis passed code review and was accepted: in the majority 
>>> of cases, synthesis was deemed to be beneficial, and would provide correct 
>>> behavior. If you are willing to say that yes, sometimes default 
>>> implementations are wrong but overall they’re correct, you’re going to have 
>>> to provide hard evidence to back up the opposite case for synthesized 
>>> 

Re: [swift-evolution] [Pitch] Synthesized static enum property to iterate over cases

2017-09-09 Thread Matthew Johnson via swift-evolution


Sent from my iPad

> On Sep 9, 2017, at 7:33 AM, Brent Royal-Gordon  wrote:
> 
>> On Sep 8, 2017, at 5:14 PM, Xiaodi Wu via swift-evolution 
>>  wrote:
>> 
>> Here, people just want an array of all cases. Give them an array of all 
>> cases. When it's not possible (i.e., in the case of cases with associated 
>> values), don't do it.
> 
> 
> I agree it should be Int-indexed; that seems to be what people want from this.
> 
> I seem to recall that there is information about the available enum cases in 
> the module metadata. If so, and if we're willing to lock that in as part of 
> the ABI design, I think we should write—or at least allow for—a custom 
> Int-indexed collection, because this may allow us to recurse into associated 
> value types. If we aren't going to have suitable metadata, though, I agree we 
> should just use an Array. There are pathological cases where instantiating a 
> large Array might be burdensome, but sometimes you just have to ignore the 
> pathological cases.
> 
> (The "infinite recursion" problem with associated values is actually 
> relatively easy to solve, by the way: Don't allow, or at least don't 
> generate, `ValuesEnumerable` conformance on enums with `indirect` cases.)

This is the direction I think makes the most sense in terms of how we should 
approach synthesis.  The open question in my mind is what the exact requirement 
of the protocol should be: should it exactly match what we synthesize 
(`[Self]`, or an associated `Collection where Iterator.Element == Self, Index == 
Int`), or should the protocol have a more relaxed requirement of 
`Sequence where Iterator.Element == Self`, like Tony proposed?
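
To make the two candidate shapes concrete, a rough sketch follows. `ValuesEnumerable` is the name used earlier in this thread; `allValues` and the second protocol's name are placeholders, not an accepted design:

```
// Option 1: the requirement matches exactly what would be synthesized.
protocol ValuesEnumerable {
    static var allValues: [Self] { get }
}

// Option 2: the more relaxed requirement, leaving the synthesized witness free
// to be an Array or a custom Int-indexed collection.
protocol ValuesEnumerableSequence {
    associatedtype AllValues: Sequence where AllValues.Iterator.Element == Self
    static var allValues: AllValues { get }
}

// What synthesis would amount to for a simple enum, written by hand today:
enum Direction: ValuesEnumerable {
    case north, south, east, west
    static var allValues: [Direction] { return [.north, .south, .east, .west] }
}
```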

> 
> -- 
> Brent Royal-Gordon
> Architechies
> 
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Proposal] Explicit Synthetic Behaviour

2017-09-09 Thread Brent Royal-Gordon via swift-evolution
> On Sep 8, 2017, at 6:03 PM, Xiaodi Wu via swift-evolution 
>  wrote:
> 
> For any open protocol (i.e., a protocol for which the universe of possible 
> conforming types cannot be enumerated a priori by the protocol designer) 
> worthy of being a protocol by the Swift standard ("what useful thing can you 
> do with such a protocol that you could not without?"), any sufficiently 
> interesting requirement (i.e., one for which user ergonomics would measurably 
> benefit from a default implementation) either cannot have a universally 
> guaranteed correct implementation or has an implementation which is also 
> going to be the most performant one (which can therefore be a non-overridable 
> protocol extension method rather than an overridable protocol requirement 
> with a default implementation). 

Counter-example: `index(of:)`, or rather, the underscored requirement 
underlying `index(of:)`. The "loop over all indices and return the first whose 
element matches" default implementation is universally guaranteed to be 
correct, but a collection like `Set` or `SortedArray` can provide an 
implementation which is more performant than the default.

-- 
Brent Royal-Gordon
Architechies

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Pitch] Synthesized static enum property to iterate over cases

2017-09-09 Thread Brent Royal-Gordon via swift-evolution
> On Sep 8, 2017, at 5:14 PM, Xiaodi Wu via swift-evolution 
>  wrote:
> 
> Here, people just want an array of all cases. Give them an array of all 
> cases. When it's not possible (i.e., in the case of cases with associated 
> values), don't do it.


I agree it should be Int-indexed; that seems to be what people want from this.

I seem to recall that there is information about the available enum cases in 
the module metadata. If so, and if we're willing to lock that in as part of the 
ABI design, I think we should write—or at least allow for—a custom Int-indexed 
collection, because this may allow us to recurse into associated value types. 
If we aren't going to have suitable metadata, though, I agree we should just 
use an Array. There are pathological cases where instantiating a large Array 
might be burdensome, but sometimes you just have to ignore the pathological 
cases.

(The "infinite recursion" problem with associated values is actually relatively 
easy to solve, by the way: Don't allow, or at least don't generate, 
`ValuesEnumerable` conformance on enums with `indirect` cases.)

-- 
Brent Royal-Gordon
Architechies

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Proposal] Random Unification

2017-09-09 Thread Brent Royal-Gordon via swift-evolution
> On Sep 8, 2017, at 2:46 PM, Jacob Williams via swift-evolution 
>  wrote:
> 
> What if we did it with something like this:
> 
> protocol RandomGenerator {
>   associatedtype T: Numeric // Since numeric types are the only kinds 
> where we could get a random number?
>   func uniform() -> T
>   // Other random type functions...
> }
> 
> Although if we didn’t constrain T to Numeric then collections could also 
> conform to it, although I’m not sure that collections would want to directly 
> conform to this. There may need to be a separate protocol for types with 
> Numeric indexes?
> 
> I’m no pro and really haven’t thought about this too deeply. Mostly just 
> spitballing/brainstorming.

I think I would simply say:

/// Conforming types generate an infinite sequence of random bits through their `next()` method.
/// They may be generated from a repeatable seed or from a source of true entropy.
protocol Randomizer: class, IteratorProtocol, Sequence where Element == UInt, Iterator == Self {
    /// Generates and returns the next `UInt.bitWidth` bits of random data.
    func next() -> UInt
}

And have this extension on it:

extension Randomizer {
    /// Permits the use of a Randomizer as a plain old iterator.
    func next() -> UInt? {
        return Optional.some(next())
    }

    /// Returns a number in the range 0 ... maximum.
    /// (This is inclusive to allow `maximum` to be `UInt.max`.)
    func next(through maximum: UInt) -> UInt {
        …
    }
}

We should also provide a singleton `StrongRandomizer`:

/// A source of cryptographically secure random data.
///
/// `StrongRandomizer` typically uses the strongest random data source provided by
/// the platform that is suitable for relatively frequent use. It may use a hardware RNG
/// directly, or it may use a PRNG seeded by a good entropy source.
///
/// `StrongRandomizer` is inherently a singleton.
/// It can be used on multiple threads, and on some platforms it may block while its
/// shared state is locked.
class StrongRandomizer: Randomizer {
    static let shared = StrongRandomizer()

    func next() -> UInt {
        …
    }
}

Finally, we can add extensions to `RandomAccessCollection`:

extension RandomAccessCollection {
    func randomElement(with randomizer: Randomizer = StrongRandomizer.shared) -> Element? {
        guard !isEmpty else { return nil }
        let offset = IndexDistance(randomizer.next(through: UInt(count) - 1))
        return self[index(startIndex, offsetBy: offset)]
    }
}

And ranges over `BinaryFloatingPoint`:

extension Range where Bound: BinaryFloatingPoint {
    func randomElement(with randomizer: Randomizer = StrongRandomizer.shared) -> Bound? {
        …
    }
}

extension ClosedRange where Bound: BinaryFloatingPoint {
    func randomElement(with randomizer: Randomizer = StrongRandomizer.shared) -> Bound {
        …
    }
}

A couple of notes:

• Fundamentally, a randomizer is an infinite sequence of random-ish bits. I say 
"random-ish" because you may be generating a repeatable pseudo-random sequence 
from a seed. But in any case, I think it's best to envision this as a special 
case of an iterator—hence the `IteratorProtocol` conformance.

• Randomizer is class-constrained because we want to encourage providing a 
defaulted parameter for the randomizer, and `inout` parameters can't be 
defaulted.

• `UInt` is intentionally inconvenient. You should not usually use a randomizer 
directly—you should pass it to one of our methods, which know how to handle it 
correctly. Making it awkward discourages direct use of the randomizer.

• `next(through:)` is provided simply to discourage incorrect modulo-ing.

• `StrongRandomizer` is provided as a slow but safe default. I don't think we 
should ship any other RNGs; people who want them can import them from modules, 
which provides at least a bit of evidence that they know what they're doing. 
(Actually, I could see adding a second RNG that's even stronger, and is 
specifically intended to be used infrequently to generate seeds or important 
cryptographic keys.)
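
For what it's worth, here is how the extensions above would read at the call site, assuming the sketch compiles as written. `StrongRandomizer` is the default, so it never needs to be spelled out; the seeded-generator line is purely hypothetical:

```
let names = ["Alice", "Bob", "Carol"]
let pick = names.randomElement()            // Optional<String>, drawn via StrongRandomizer.shared
let fraction = (0.0..<1.0).randomElement()  // Optional<Double> in [0, 1)

// A seeded, reproducible generator conforming to Randomizer could be passed
// explicitly instead (hypothetical instance name):
// let debugPick = names.randomElement(with: seededGenerator)
```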

-- 
Brent Royal-Gordon
Architechies

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Proposal] Explicit Synthetic Behaviour

2017-09-09 Thread Haravikk via swift-evolution

> On 9 Sep 2017, at 09:33, Xiaodi Wu  wrote:
> 
> 
> On Sat, Sep 9, 2017 at 02:47 Haravikk via swift-evolution 
> > wrote:
> 
>> On 9 Sep 2017, at 02:02, Xiaodi Wu > > wrote:
>> 
>> On Fri, Sep 8, 2017 at 4:00 PM, Itai Ferber via swift-evolution 
>> > wrote:
>> 
>> 
>>> On Sep 8, 2017, at 12:46 AM, Haravikk via swift-evolution 
>>> > wrote:
>>> 
>>> 
 On 7 Sep 2017, at 22:02, Itai Ferber > wrote:
 
 protocol Fooable : Equatable { // Equatable is just a simple example
 var myFoo: Int { get }
 }
 
 extension Fooable {
 static func ==(_ lhs: Self, _ rhs: Self) -> Bool {
 return lhs.myFoo == rhs.myFoo
 }
 }
 
 struct X : Fooable {
 let myFoo: Int
 let myName: String
 // Whoops, forgot to give an implementation of ==
 }
 
 print(X(myFoo: 42, myName: "Alice") == X(myFoo: 42, myName: "Bob")) // true
 This property is necessary, but not sufficient to provide a correct 
 implementation. A default implementation might be able to assume something 
 about the types that it defines, but it does not necessarily know enough.
>>> 
>>> Sorry but that's a bit of a contrived example; in this case the protocol 
>>> should not implement the equality operator if more information may be 
>>> required to define equality. It should only be implemented if the protocol 
>>> is absolutely clear that .myFoo is the only part of a Fooable that can or 
>>> should be compared as equatable, e.g- if a Fooable is a database record and 
>>> .myFoo is a primary key, the data could differ but it would still be a 
>>> reference to the same record.
>>> 
>>> To be clear, I'm not arguing that someone can't create a regular default 
>>> implementation that also makes flawed assumptions, but that 
>>> synthesised/reflective implementations by their very nature have to, as 
>>> they cannot under every circumstance guarantee correctness when using parts 
>>> of a concrete type that they know nothing about.
>> 
>> You can’t argue this both ways:
>> If you’re arguing this on principle, that in order for synthesized 
>> implementations to be correct, they must be able to — under every 
>> circumstance — guarantee correctness, then you have to apply the same 
>> reasoning to default protocol implementations. Given a default protocol 
>> implementation, it is possible to come up with a (no matter how contrived) 
>> case where the default implementation is wrong. Since you’re arguing this on 
>> principle, you cannot reject contrived examples.
>> If you are arguing this in practice, then you’re going to have to back up 
>> your argument with evidence that synthesized examples are more often wrong 
>> than default implementations. You can’t declare that synthesized 
>> implementations are by nature incorrect but allow default implementations to 
>> slide because in practice, many implementations are allowable. There’s a 
>> reason why synthesis passed code review and was accepted: in the majority of 
>> cases, synthesis was deemed to be beneficial, and would provide correct 
>> behavior. If you are willing to say that yes, sometimes default 
>> implementations are wrong but overall they’re correct, you’re going to have 
>> to provide hard evidence to back up the opposite case for synthesized 
>> implementations. You stated in a previous email that "A 
>> synthesised/reflective implementation however may return a result that is 
>> simply incorrect, because it is based on assumptions made by the protocol 
>> developer, with no input from the developer of the concrete type. In this 
>> case the developer must override it in to provide correct behaviour." — if 
>> you can back this up with evidence (say, taking a survey of a large number 
>> of model types and see if in the majority of cases synthesized 
>> implementation would be incorrect) to provide a compelling argument, then 
>> this is something that we should in that case reconsider.
>> 
>> Well put, and I agree with this position 100%. However, to play devil's 
>> advocate here, let me summarize what I think Haravikk is saying:
>> 
>> I think the "synthesized" part of this is a red herring, if I understand 
>> Haravikk's argument correctly. Instead, it is this:
>> 
>> (1) In principle, it is possible to have a default implementation for a 
>> protocol requirement that produces the correct result--though not 
>> necessarily in the most performant way--for all possible conforming types, 
>> where by conforming we mean that the type respects both the syntactic 
>> requirements (enforced by the compiler) and the semantic requirements (which 
>> may not necessarily be enforceable by the 

Re: [swift-evolution] [Proposal] Random Unification

2017-09-09 Thread Jean-Daniel via swift-evolution

> On 9 Sep 2017, at 03:07, Xiaodi Wu via swift-evolution wrote:
> 
> On Fri, Sep 8, 2017 at 7:50 PM, Stephen Canon  > wrote:
>> On Sep 8, 2017, at 8:09 PM, Xiaodi Wu via swift-evolution 
>> > wrote:
>> 
>> This topic has been broached on Swift Evolution previously. It's interesting 
>> to me that Steve Canon is so certain that CSPRNGs are the way to go. I 
>> wasn't aware that hardware CSPRNGs have come such a long way and are so 
>> ubiquitous as to be feasible as a basis for Swift random numbers. If so, 
>> great.
>> 
>> Otherwise, if there is any way that a software, non-cryptographically secure 
>> PRNG is going to outperform a CSPRNG, then I think it's worthwhile to have a 
>> (carefully documented) choice between the two. I would imagine that for many 
>> uses, such as an animation in which you need a plausible source of noise to 
>> render a flame, whether that is cryptographically secure or not is 
>> absolutely irrelevant but performance may be key.
> 
> Let me be precise: it is absolutely possible to outperform CSPRNGs. They have 
> simply become fast enough that the performance gap doesn’t matter for most 
> uses (let’s say amortized ten cycles per byte or less—whatever you are going 
> to do with the random bitstream will be much more expensive than getting the 
> bits was).
> 
> That said, yes, there should definitely be other options. It should be 
> possible for users to get reproducible results from a stdlib random interface 
> run-to-run, and also across platforms. That alone requires that at least one 
> other option for a generator be present. There may also be a place for a very 
> high-throughput generator like xorshiro128+.
> 
> All I’m really saying is that the *default* generator should be an 
> os-provided unseeded CSPRNG, and we should be very careful about documenting 
> any generator options.
> 
> 
> Agree on all points. Much like Swift's strings are Unicode-correct instead of 
> the fastest possible way of slicing and dicing sequences of ASCII characters, 
> Swift's default PRNG should be cryptographically secure.

I agree too. Anyone who needs a random generator but doesn’t know how it works, 
or what a CSPRNG is, needs a CSPRNG.

For other users, we still need a couple of other implementations.



___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution