Hi Gwendal,

I hear your frustration. Some comments inline.

> On May 31, 2017, at 5:36 AM, Gwendal Roué <[email protected]> wrote:
> 
> Itai,
> 
> (This email is not technical)
> 
> I'm not claiming that SE-0166 should be able to address all archival formats. 
> I've been talking about GRDB to show at least one format that SE-0166 doesn't 
> cover well. And should SE-0166 be fixed to support SQL (in the GRDB fashion), 
> this does not mean that other developers won't eventually fight with SE-0166 
> until they understand it does not fit their bill.
I’ll respond to the technical portion of this thread in the other email, but 
let me at least provide some background here. When working on this feature, 
we thought for a very long time about what we were looking to support, and 
how (feel free to take a look at the Alternatives Considered
section of the proposal, though of course, there were more attempts and 
approaches before that).
The majority of this thought went into figuring out the proper abstractions 
for this new API — how can we abstract over different archival and 
serialization formats in a way that remains useful?

In truth, if you try to abstract over all archival and serialization formats, 
the abstraction that you get is... the empty set. :) There are simply so many 
different things at odds with one another across different formats (JSON 
supports null values, plist does not; numbers are arbitrary precision in JSON, 
but not in plist or MessagePack or others; plist and MessagePack and others 
support binary data blobs, but JSON does not; etc.) that if you try to abstract 
over them all, you end up with nothing useful — an empty protocol that covers 
nothing.

So the key here is to try to strike a pragmatic balance between supporting some 
of the most common archival and serialization formats in a way that makes them 
useful, even if we have to handle special cases in some of them (e.g. null 
values in plist, binary data in JSON, etc.). It’s true that we cannot support 
them all, but in fact, we’re not looking to, because it would weaken the API.
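To make one of those special cases concrete, here is a minimal sketch using Foundation's JSONEncoder (the default-to-base64 behavior is the Swift 4 Foundation behavior; the byte values are arbitrary):

```swift
import Foundation

// JSON has no binary blob type, so Foundation special-cases Data instead of
// rejecting it: JSONEncoder's default dataEncodingStrategy is .base64.
let blob = Data([0xDE, 0xAD, 0xBE, 0xEF])
let json = try JSONEncoder().encode([blob])
print(String(data: json, encoding: .utf8)!)  // ["3q2+7w=="]
```

A plist encoder, by contrast, can write the same bytes natively, which is exactly the kind of per-format decision the paragraph above describes.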

I will respond to the comments specific to GRDB in the other thread, but this 
is a bit of background. Yes, there will always be developers who will not be able 
to fit a serialization format into this API because it is fundamentally 
different in a way that cannot fit with the rest of the formats we’re looking 
to support. There’s nothing to be done about that. But you mention this 
yourself.

> But there's something very special with SE-0166:
> 
> It's in the standard library, with all the backward-compatibility constraints 
> that come with such a position.
> 
> IT'S BLESSED WITH CODE GENERATION.
> 
> I don't know if you, Michael LeHew, Tony Parker, and the core team, realize 
> the importance of this insanely great privilege granted to this proposal.
Believe me, I do, because we considered a lot of different approaches before 
settling on this. We wanted to avoid code generation for this reason — it has a 
privileged place within the compiler, it generates code which the user may not 
be able to introspect, etc.
At the end of the day, though, we decided on this option because it provided 
the best user experience as part of the language in the vast majority of cases. 
There’s a lot to be said for that, and you mention this yourself, too.

> The lack of introspection and macros in Swift makes SE-0166 immensely 
> attractive for a whole category of libraries.
> 
> When SE-0166 is lacking, should those libs ignore it, and lose CODE 
> GENERATION, which means looking like it's still Swift 3?
> 
> Should those libs claim SE-0166 conformance, and raise runtime errors for 
> invalid inputs (where "invalid" does not mean "invalid data", or "invalid 
> code", but "impossible to fit in SE-0166" <=> "invalid library")?
That being said, let’s separate the capabilities of the Codable API itself from 
the code generated by the compiler for it. While the code generation is a huge 
convenience for the majority of simple cases, it does just that — generate code 
for the simple cases. We cannot arbitrarily generate code to match arbitrary 
applications. Much more is possible with custom encode/decode implementations 
and custom CodingKeys than you might imagine; there’s no need to stick to the 
default, compiler-generated implementation. (Data migration, format-specific 
encoded representations, multiple sets of CodingKeys, etc.)
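As a minimal sketch of what a hand-written implementation can do (the Player type, its keys, and the legacy-payload migration are hypothetical, invented for illustration):

```swift
import Foundation

// Hypothetical record type. A custom init(from:) plus custom CodingKeys
// renames an encoded key and migrates a legacy payload format; the
// compiler-generated implementation could do neither.
struct Player: Codable {
    var name: String
    var score: Int

    private enum CodingKeys: String, CodingKey {
        case name = "player_name"  // encoded name differs from the property
        case score
    }

    init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKeys.self)
        name = try container.decode(String.self, forKey: .name)
        // Data migration: accept both Int and legacy String-encoded scores.
        if let value = try? container.decode(Int.self, forKey: .score) {
            score = value
        } else {
            score = Int(try container.decode(String.self, forKey: .score)) ?? 0
        }
    }

    func encode(to encoder: Encoder) throws {
        var container = encoder.container(keyedBy: CodingKeys.self)
        try container.encode(name, forKey: .name)
        try container.encode(score, forKey: .score)
    }
}
```

With this, decoding the old payload `{"player_name": "gwendal", "score": "42"}` succeeds, and the type still round-trips through any other Encoder.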

If a library finds use for the Codable APIs only for the code generation, then 
I think that’s likely a misapplication of the API. Attempting to use the 
Codable API to fit a square peg into a round hole will be frustrating because, 
well, it was designed for a singular purpose.
The code generation that comes with Codable is meant for archival and 
serialization, not for arbitrary introspection. You’re right in that there is 
an overlap here (and I think the key pain point is that we need better tools 
for doing introspection, macros, and compile-time metaprogramming), but this is 
not a problem that this API is meant to solve. If a library cannot use the code 
generated by the Codable API, it’s not an "invalid library" — it’s just a poor 
fit. A full-featured compile-time macro/introspection system will require 
further thought and discussion, but the current Codable feature will eventually 
fit into that.

> I'd like to hear a little better than that :-) GRDB is a library of unusual 
> quality (sorry for the auto-congratulation). Until now, fatal errors thrown 
> by GRDB were always a sign of programmer mistake. Not of defects in the 
> foundations. I wish this would remain true.
> 
> Less caveats and runtime/fatal errors mean less user frustration.
> 
> Less caveats also mean less documentation to write. Ideally, this should be 
> enough: https://github.com/groue/GRDB.swift/tree/Swift4#codable-records
There are always going to be cases where certain input cannot be limited in 
the type system. Consider JSON, which does not support NaNs or Infinity. There 
is no way to statically prevent that input via the type system, so yes, there 
has to be a runtime error thrown for that. That doesn’t make JSON invalid for 
this API; we just add a strategy to the encoder to allow users to customize 
what happens there. These errors don’t have to be fatal errors — we have 
EncodingError.invalidValue to cover just this issue; it is just not possible 
to cover all possible properties and all possible values of inputs statically.
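A minimal sketch of that strategy in Foundation's JSONEncoder (the placeholder strings "inf"/"-inf"/"nan" are arbitrary choices):

```swift
import Foundation

// Encoding Double.infinity throws EncodingError.invalidValue by default
// (JSON has no Infinity) — a recoverable error, not a fatalError.
var caught = false
do {
    _ = try JSONEncoder().encode([Double.infinity])
} catch is EncodingError {
    caught = true
}

// An explicit strategy opts in to string placeholders instead of an error.
let encoder = JSONEncoder()
encoder.nonConformingFloatEncodingStrategy =
    .convertToString(positiveInfinity: "inf", negativeInfinity: "-inf", nan: "nan")
let data = try encoder.encode([Double.infinity])
print(String(data: data, encoding: .utf8)!)  // ["inf"]
```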

It’s up to Encoders and Decoders to make the decisions that allow them to fit 
nicely within this. If they cannot, then it might be that they are not a good 
fit for this API.

> Gwendal
> Just a guy that write Swift apps and libraries
> 
> 
>> On May 30, 2017, at 8:49 PM, Itai Ferber <[email protected]> wrote:
>> 
>> Hi Gwendal,
>> 
>> There are no stupid questions — everything helps hammer out this API, so I 
>> appreciate you taking the time to look at this so deeply.
>> I have to confess that I’m not familiar with this concept, but let’s take a 
>> look:
>> 
>> if let valueType = T.self as? DatabaseValueConvertible.Type {
>>     // if column is missing, trigger the "missing key" error or return nil.
>> } else if let complexType = T.self as? RowConvertible.Type {
>>     // if row scope is missing, trigger the "missing key" error or return nil.
>> } else {
>>     // don't know what to do
>>     fatalError("unsupported")
>> }
>> Is it appropriate for a type which is neither DatabaseValueConvertible nor 
>> RowConvertible to be decoded with your decoder? If not, then this warrants a 
>> preconditionFailure or an error of some sort, right? In this case, that 
>> would be valid.
>> 
>> You also mention that "it’s still impossible to support other Codable types" 
>> — what do you mean by this? Perhaps there’s a way to accomplish what you’re 
>> looking to do.
>> In any case, one option (which is not recommended unless there are no other 
>> avenues to solve this by) is to perform a "dry run" decoding. Attempt to 
>> decode the type with a dummy decoder to see what container it will need, 
>> then prepare your approach and do it again for real. Obviously, this isn’t a 
>> clean way to do it if we can find alternatives, but it’s an option.
>> 
>> — Itai
>> 
>> On 29 May 2017, at 4:51, Gwendal Roué via swift-evolution wrote:
>> 
>> Hello,
>> 
>> I have already asked stupid questions about SE-0167 and SE-0166, but this 
>> time I hope this is a real one.
>> 
>> According to SE-0166, codable types themselves instantiate a single value 
>> decoder, or a keyed container:
>> 
>> public struct Farm : Codable {
>>     public init(from decoder: Decoder) throws {
>>         let container = try decoder.container(keyedBy: CodingKeys.self)
>>         ...
>>     }
>> }
>> 
>> public enum Animal : Int, Codable {
>>     public init(from decoder: Decoder) throws {
>>         let intValue = try decoder.singleValueContainer().decode(Int.self)
>>         ...
>>     }
>> }
>> 
>> According to SE-0167, decoders decode non-trivial types in their 
>> decode(_:forKey:) and decodeIfPresent(_:forKey:) methods:
>> 
>> func decode<T>(_ type: T.Type, forKey key: Key) throws -> T where T : Decodable
>> func decodeIfPresent<T>(_ type: T.Type, forKey key: Key) throws -> T? where T : Decodable
>> 
>> My trouble is that the decoder does not know whether the Decodable type will 
>> ask for a keyed container, or for a single value container.
>> 
>> Why is it a problem?
>> 
>> In the context of decoding of SQL rows, keys may refer to different things, 
>> depending on whether we are decoding a *value*, or a *complex object*:
>> 
>> - for values, keys are column names, as everybody can expect
>> - for complex objects, keys are names of "row scopes". Row scopes are a 
>> concept introduced by GRDB.swift; they allow a type that knows how to 
>> consume `SELECT * FROM table1` to also consume the results of `SELECT 
>> table1.*, table2.* FROM table1 JOIN table2` through a "scope" that presents 
>> the row in the shape expected by the consumer (here, only columns from 
>> table1).
>> 
>> This is supposed to allow support for types that contain both nested types 
>> and values (one of the goals of SE-0166 and SE-0167):
>> 
>> struct Compound : Codable {
>>     let someStruct: SomeStruct // object that feeds on the "someStruct" scope
>>     let name: String           // value that feeds on the "name" column
>> }
>> 
>> The two decoding methods decode(_:forKey:) and decodeIfPresent(_:forKey:) 
>> can't be implemented nicely, because they don't know whether the decodable 
>> type will ask for a keyed container or a single value container, and thus 
>> they don't know whether they should look for the presence of a row scope or 
>> of a column.
>> 
>> A workaround is to perform runtime checks on the GRDB protocols adopted by 
>> T, as below. But it's still impossible to support other codable types:
>> 
>> if let valueType = T.self as? DatabaseValueConvertible.Type {
>>     // if column is missing, trigger the "missing key" error or return nil.
>> } else if let complexType = T.self as? RowConvertible.Type {
>>     // if row scope is missing, trigger the "missing key" error or return nil.
>> } else {
>>     // don't know what to do
>>     fatalError("unsupported")
>> }
>> 
>> Do you have any advice?
>> 
>> Gwendal Roué
>> 
>> 
>> _______________________________________________
>> swift-evolution mailing list
>> [email protected] <mailto:[email protected]>
>> https://lists.swift.org/mailman/listinfo/swift-evolution 
>> <https://lists.swift.org/mailman/listinfo/swift-evolution>
> 

_______________________________________________
swift-evolution mailing list
[email protected]
https://lists.swift.org/mailman/listinfo/swift-evolution
