It looks like we're not going to agree on this...

On Thu, Jul 10, 2014 at 3:59 PM, Jonathan S. Shapiro <[email protected]> wrote:
> I think you are missing something very, very important.
>
> For the most part, the big advantage of Python/Perl/etc. for prototyping is
> brevity: there's lots of stuff you don't have to write or think about. Type
> inference spans most of that gap, and I think it will be important to
> acceptance that BitC gives some of that prototyping feel.
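To make the quoted claim concrete, here is roughly the "prototyping feel"
that complete inference buys. This is a minimal sketch in ordinary Haskell
(standing in for the general idea, not for BitC syntax), and the names are
purely illustrative: nothing is annotated, yet every type is still inferred
and checked statically.

    -- Illustrative Haskell sketch (not BitC).
    import Data.List (group, sort, sortOn)

    -- No annotation written; the compiler infers
    --   histogram :: Ord a => [a] -> [(a, Int)]
    histogram xs = sortOn (negate . snd)
                     [ (head g, length g) | g <- group (sort xs) ]

    main = print (histogram (words "the quick fox and the lazy dog and the cat"))

The whole file is statically typed, but reads about as tersely as the
Python/Perl version would.
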
You're right, I didn't realize BitC was aiming to be any good for
prototyping.

> But the more important issue is that humans can't write static types
> correctly. In today's systems, the types involved are complex enough that
> when humans are asked to state them they come up with typings that are not
> complete.

Are you saying _no one_ will understand the type system well enough to
write good types? I don't think so. It sounds like you want BitC to be
useful to people who don't know how to use it.

> These two things become especially important when we start implementing DSLs
> using mixfix, which should be wildly popular if we can get that right.

That sort of thing has always seemed kludgy to me. I had a bad experience
trying to do a DSL with Java. I knew it wouldn't look very good, but it
turned out it couldn't be made to look _remotely_ good, because of Java's
almost-nonexistent support for controlling quantifier order. (That is,
when using wildcards.) If you cut corners, you might hit a wall. It seems
to me that the real way to make typed DSLs is to start with a powerful
enough type system.

>> On Wed, Jul 9, 2014 at 10:06 PM, Jonathan S. Shapiro <[email protected]>
>> wrote:
>> >
>> > 1. Require type annotations, at least in certain cases
>> > 2. Give up complete inference, identifying the cases where the inference
>> > algorithm will require annotations.
>>
>> I'm not sure I can tell the difference in meaning between those two.
>> Are you saying only in the second case would we have some precise
>> general understanding of the limitations of the inferencer?
>
> I'm saying that if you are willing to sacrifice completeness of inference,
> you can choose not to attempt to infer some things in order to stay
> decidable.

Um, OK. I think that answers my question.

> One problem with that is that if the programmer cannot correctly express the
> types, then it's not realistic to expect them to override the inference
> engine in places where the inference engine isn't doing the job
> automatically.

Right. People who would be able to improve the inferencer would probably
be BitC language experts. I'm not saying every BitC programmer would have
their own pet type inferencer.

> But BitC already has a lot more annotation than, say, Haskell. It could well
> turn out that our termination conditions are different as a result.

Then it doesn't actually sound much good for prototyping.
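Coming back to the "give up complete inference / require annotations in
certain cases" exchange above, here is a minimal Haskell sketch of that
trade-off (Haskell only because no BitC syntax appears in this thread; the
type and function names are illustrative). Polymorphic recursion is the
classic case where inference is undecidable, so the compiler simply refuses
to proceed without a programmer-supplied type:

    -- Illustrative Haskell sketch (not BitC).
    -- A nested ("non-uniform") data type.
    data Nested a = Leaf | Cons a (Nested [a])

    -- The recursive call is at type 'Nested [a]', not 'Nested a'
    -- (polymorphic recursion). Inferring such a type is undecidable in
    -- general, so GHC rejects this definition unless the signature below
    -- is written out.
    size :: Nested a -> Int
    size Leaf        = 0
    size (Cons _ ns) = 1 + size ns

    -- Small usage example: a two-element nested list, so this prints 2.
    main :: IO ()
    main = print (size (Cons 'x' (Cons "yz" Leaf)))

That is the flavor of "choosing not to attempt to infer some things in
order to stay decidable": the annotation burden lands exactly where
inference would otherwise have to give up.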
