My bad for such a convoluted example. All of this is trying to say something rather simple:
I'm sure almost all of you who've worked with C++ have used Boost header-only libraries. Many of them are really nice. There are reasons why third-party Swift code might be distributed by one person and incorporated by another in source form, a la Boost header-only libraries. One of these reasons is that some compiler optimizations are not possible across module boundaries. With your proposal implemented, such third-party code could not be both incorporated as source (as opposed to a compiled module) and extended in certain ways without modifying the code itself. To me, that's a loss.

On Thu, Apr 28, 2016 at 9:11 PM, Xiaodi Wu <[email protected]> wrote:

> On Thu, Apr 28, 2016 at 8:32 PM, Erica Sadun <[email protected]> wrote:
>
>> On Apr 28, 2016, at 6:20 PM, Xiaodi Wu <[email protected]> wrote:
>>
>> On Thu, Apr 28, 2016 at 6:44 PM, Erica Sadun <[email protected]> wrote:
>>
>>> Can you give me a specific example of where this approach fails for you?
>>>
>>> -- E
>>
>> Sure, I'll describe one (renaming some things for clarity and stripping
>> out the meat of the code, because it's not relevant and because it's
>> not elegant)--
>>
>> In one file, I have:
>>
>> ```
>> class PortedTransform {
>>     // This class was ported from C++.
>>     // It transforms input FP values to output values in a complicated way.
>>     // It's a standalone entity, and the algorithm is even under patent
>>     // (not owned by me, though it's legal for me to use it for my purposes).
>>     // For this reason, this ported code lives in its own file.
>> }
>> ```
>>
>> In another file, I have:
>>
>> ```
>> class MyAsinhTransform {
>>     // This class was written by me; nothing earth-shattering here.
>> }
>>
>> class MyLogTransform {
>>     // Also written by me.
>> }
>>
>> class MyLinearTransform {
>>     // Also written by me.
>> }
>> ```
>>
>> Transforming values one at a time isn't fast enough, so in another
>> file, I have:
>>
>> ```
>> import Accelerate
>>
>> protocol AcceleratedTransform {
>>     func scale(_: [Double]) -> [Double]
>>     func unscale(_: [Double]) -> [Double]
>>     // Other functions here;
>>     // some are already implemented in PortedTransform, though.
>> }
>>
>> extension AcceleratedTransform {
>>     // Default implementations for some functions,
>>     // but not `scale(_:)` and `unscale(_:)`, obviously.
>> }
>>
>> extension MyAsinhTransform : AcceleratedTransform {
>>     // Use BLAS to implement scale(_:) and unscale(_:),
>>     // and override some default implementations.
>> }
>>
>> extension MyLogTransform : AcceleratedTransform {
>>     // Use BLAS to implement scale(_:) and unscale(_:),
>>     // and override some default implementations.
>> }
>>
>> extension MyLinearTransform : AcceleratedTransform {
>>     // Use BLAS to implement scale(_:) and unscale(_:),
>>     // and override some default implementations.
>> }
>>
>> extension PortedTransform : AcceleratedTransform {
>>     // Use BLAS to implement scale(_:) and unscale(_:).
>> }
>> ```
>>
>> I think I'm missing something here in terms of a question. Your
>> imported stuff is your imported stuff. Your extension implements
>> "required" elements but not scale or unscale.
>>
>> If you extend MyAsinhTransform, you do required but not override for
>> scale/unscale. You do required override for anything you replace from
>> AcceleratedTransform. What is BLAS? And what are you specifically
>> asking about?
>>
>> -- E, apologizing for not understanding
>
> Sorry, stripped out a little too much, I guess. Let me expand a little:
>
> In this example, `PortedTransform` has, by virtue of how it works, an
> upper bound and a lower bound for valid input (among other interesting
> methods and properties). Exceed those bounds for your input and
> `PortedTransform` regurgitates garbage but does not throw any kind of
> error.
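To make the shape of the example concrete, here's a minimal compilable sketch of the retroactive-modeling part. The method bodies are placeholders I've made up (the real ported algorithm is obviously not shown), but the structure matches the description above:

```swift
import Foundation

// Stand-in for the ported class. In the real project this lives in its
// own file and its source is never touched; the math here is a placeholder.
final class PortedTransform {
    func scaleOne(_ x: Double) -> Double { log10(x) }      // placeholder math
    func unscaleOne(_ y: Double) -> Double { pow(10, y) }  // placeholder math
}

protocol AcceleratedTransform {
    func scale(_ values: [Double]) -> [Double]
    func unscale(_ values: [Double]) -> [Double]
}

// Retroactive modeling: the ported class gains the protocol via an
// extension in a separate file, without modifying PortedTransform itself.
extension PortedTransform: AcceleratedTransform {
    func scale(_ values: [Double]) -> [Double] { values.map(scaleOne) }
    func unscale(_ values: [Double]) -> [Double] { values.map(unscaleOne) }
}

let transform = PortedTransform()
let scaled = transform.scale([1, 10, 100])
let roundTripped = transform.unscale(scaled)  // approximately [1, 10, 100]
```

This is exactly the pattern whose availability across source-incorporated (rather than compiled-module) code is at issue.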
> Obviously, the linear transform does not care about such silly things,
> because it can transform essentially any FP input value, while the log
> transform simply traps when it encounters a negative value (which, as a
> precondition, it should never encounter).
>
> BLAS is an accelerated linear algebra library; Apple has implemented a
> very nicely optimized one as part of its Accelerate framework. I use
> BLAS to sum, for example, two arrays of floating-point values--it's
> very, very highly optimized. In that situation, there's no trapping
> when a single value is out of bounds (I get NaNs instead), and thus I
> must determine bounds in order to anticipate when the output will be
> garbage or NaN. (There are, as part of the Accelerate framework,
> accelerated functions to clamp entire arrays to given bounds with
> maximal efficiency.)
>
> For accelerated scaling and unscaling, then, it is essentially always
> necessary to compute upper and lower bounds, even when that's
> unnecessary for non-accelerated scaling and unscaling, which operates
> on one value at a time. For that reason, `AcceleratedTransform`
> requires methods that compute upper and lower bounds, and provides a
> default implementation of accelerated clamping that calls those
> bound-computing methods and then uses the results as parameters when
> calling functions in Accelerate.framework. Methods for the computation
> of bounds already exist in `PortedTransform` but not in my own
> transforms. With your proposal, how would I retroactively model this
> requirement without touching code for `PortedTransform` and without
> compiling this one class into its own library? I'd like to be able to
> take advantage of the maximum possible compiler optimization, and
> optimizing across module boundaries is (as far as I understand) a
> little dicier.
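The bounds-requirement design described here can be sketched as follows. This is illustrative only: I've replaced the Accelerate array-clipping call with a plain Swift loop so the shape is visible without the framework, and the bounds values are made up:

```swift
import Foundation

protocol AcceleratedTransform {
    // Required precisely because the accelerated path never traps:
    // the default clamp uses these to head off garbage/NaN output.
    var lowerBound: Double { get }
    var upperBound: Double { get }
    func scale(_ values: [Double]) -> [Double]
}

extension AcceleratedTransform {
    // Default implementation of accelerated clamping. In the real code
    // this would be a single call into Accelerate's array-clipping
    // routine, parameterized by the bounds computed above.
    func clamped(_ values: [Double]) -> [Double] {
        values.map { min(max($0, lowerBound), upperBound) }
    }
}

// A log-like transform: anything at or below zero is out of bounds.
struct MyLogTransform: AcceleratedTransform {
    var lowerBound: Double { .leastNormalMagnitude }
    var upperBound: Double { .greatestFiniteMagnitude }
    func scale(_ values: [Double]) -> [Double] {
        clamped(values).map { log10($0) }  // clamp first, then scale safely
    }
}

let logT = MyLogTransform()
let outputs = logT.scale([-1.0, 0.5, 100.0])  // no NaNs: -1 is clamped up first
```

The open question is how `PortedTransform`, which already has its own bounds methods, satisfies these requirements without its source being modified.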
> (Moreover, for MyLinearTransform, I override the clamping method to
> return the input without calling out to any framework functions,
> because I know a priori that the bounds are -infinity and infinity. I
> think that override will still be possible under your proposal,
> though.)
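The override in that parenthetical looks roughly like this (again a sketch with placeholder bounds; in Swift, the "override" is just the conforming type supplying its own implementation, which wins over the protocol extension's default because `clamped(_:)` is declared as a requirement):

```swift
protocol AcceleratedTransform {
    // Declared as a requirement so a conforming type's version is
    // dynamically dispatched even through the protocol type.
    func clamped(_ values: [Double]) -> [Double]
}

extension AcceleratedTransform {
    // Default: a bounds-based clamp (placeholder bounds for illustration).
    func clamped(_ values: [Double]) -> [Double] {
        values.map { min(max($0, -1e6), 1e6) }
    }
}

struct MyLinearTransform: AcceleratedTransform {
    // Bounds are known a priori to be -infinity...infinity, so skip the
    // clamp (and any framework call) and return the input untouched.
    func clamped(_ values: [Double]) -> [Double] { values }
}

let linear: any AcceleratedTransform = MyLinearTransform()
let passedThrough = linear.clamped([1e9, -1e9])  // unchanged: [1e9, -1e9]
```

Had `clamped(_:)` lived only in the extension and not in the protocol itself, the call through the protocol type would have used the default instead.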
_______________________________________________
swift-evolution mailing list
[email protected]
https://lists.swift.org/mailman/listinfo/swift-evolution
