On Tue, Jun 16, 2015 at 5:43 PM, Matt Rice <ratm...@gmail.com> wrote:
> On Tue, Jun 16, 2015 at 1:06 PM, Matt Oliveri <atma...@gmail.com> wrote:
>> On Tue, Jun 16, 2015 at 7:05 AM, Matt Rice <ratm...@gmail.com> wrote:
>
> (reordering a bit to reply to both these things below)
>
>>> and frustratingly set
>>> up so that you cannot speak about a type's return value, or specify some
>>> return value's future type existentially,
>>
>> I don't understand. A type doesn't have a return value. And what is a
>> "future type"? You're not talking about futures, are you?
>
> In a sense yes, parsers and languages are *full* of stuff which will
> only be representable in the future.

I mean this "futures":
http://en.wikipedia.org/wiki/Futures_and_promises

> value:type is one of these things,

I think you're being _too_ creative here. The typing judgement is just
some metalanguage syntax. I don't see how it could be something whose
execution involves futures, or any side effects. It's unconventional to
even think of it as being executable.

> so we collect all the arguments *before* we can solve whatever
> it is we are trying to prove, it is useful if everything is consistent
> across phases.

A thing to prove is a statement. It's syntax. You don't need to
evaluate anything. So I don't see how we'd be waiting on futures in
"value:type".
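To put that concretely, here's a minimal sketch (in Python; the representation and names are purely my illustration, nothing from BitC) of a judgement `e : t` as plain syntax, decided by a pure recursive function. There is nothing to evaluate and nothing to wait on:

```python
# Hypothetical sketch: a typing judgement "e : t" as plain syntax.
# Checking it is a pure, terminating recursion -- no futures, no effects.
from dataclasses import dataclass

@dataclass
class Lit:
    value: object

@dataclass
class Add:
    left: object
    right: object

def check(expr, ty):
    """Decide the judgement `expr : ty` by inspecting syntax alone."""
    if isinstance(expr, Lit):
        return type(expr.value).__name__ == ty
    if isinstance(expr, Add):
        return ty == "int" and check(expr.left, "int") and check(expr.right, "int")
    return False

check(Add(Lit(1), Lit(2)), "int")  # True
check(Lit(True), "int")            # False
```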

>>> it's easy to fall under the spell of omniscient type values, rather
>>> than isolating phases from one another, and reusing the language.
>>
>> I don't know what you mean by "omniscient". But it kind of sounds like
>> you're against having code be reusable across phases.
>
> That is my main problem with type: exactly that it doesn't have a
> return value,

If by return value, you mean the result of evaluating an expression,
then type expressions _can_ have a return value: a type
representation. But most dependent type systems don't allow you to
inspect a type representation (e.g. with a type case), so they might
get optimized out before runtime. Or they might be used internally to
implement polymorphism.
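For instance (a minimal Python sketch; the type representations and helper names here are my own invention, not any particular system's), a type expression can evaluate to a type representation, and a typecase over that representation is one way polymorphism gets implemented:

```python
# Hypothetical sketch: a "type expression" evaluates to a type
# representation, which can then drive polymorphic behavior (a typecase).
def pair_type(a, b):
    """A type-level function: given two type reps, return the pair's type rep."""
    return ("pair", a, b)

def zero_of(ty):
    """Typecase: inspect a type representation to pick a default value."""
    if ty == "int":
        return 0
    if ty == "str":
        return ""
    if isinstance(ty, tuple) and ty[0] == "pair":
        return (zero_of(ty[1]), zero_of(ty[2]))
    raise TypeError(ty)

zero_of(pair_type("int", "str"))  # (0, '')
```

A system that forbids typecase can still compute `pair_type(...)` internally; it just never lets user code branch on the result, which is what lets the representations be erased.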

> the low-level sequence is value:type = sequence_of_one_or_more_things,
> so there is a 1 <- n (one type for every typed sequence),
> further there is a 1 -> n fields to a type (or we can go from 1 type,
> to n typed values),

I'm not sure, are you talking about reduction sequences, from
operational semantics? Otherwise I have no idea.

> this expansion and contraction, is rather fundamental, and so saying
> that type doesn't have a return value is implying type is global and
> ambient, because it needs to be globally consistent, I believe there
> are other ways of proving global consistency, through isolation and
> the interpretation of the type's value...

I don't know what you mean by global consistency. And I'm still not
sure I know what you mean about whether a type has a return value.

> this *allows* types to be
> reusable across phases,

Types are already reusable across phases. In a dependent type system,
all computations are constrained by the same type system, whether they
take place at compile time or runtime.

> if we take away the "return value" of 'type',
> then we absolutely *remove* the ability to pass them as arguments and
> use parameter passing as a mechanism for giving or denying authority
> to types, so the easiest analogy I can come up with is that an
> ambient/global idea of type, essentially builds a fence, where it
> seems reasonable that a wall would suffice,

I don't see why you're talking about authority, since type expressions
need to be pure computations.

> you'll most certainly have to drop subtyping though, or at least move to its
> equivalent in the space where types are values...

Subtyping still makes sense when types are passed around in the same
way as values.
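A sketch of what I mean, in Python (again, the structural type representations are my own invention for illustration): once types are reified as values, a subtype relation is just another pure function on those values, so nothing about passing types around forces you to drop it:

```python
# Hypothetical sketch: with types reified as values, subtyping becomes
# an ordinary (pure) function on type representations.
def subtype(a, b):
    """Structural subtyping on simple type reps: does a <: b hold?"""
    if b == "any":
        return True
    if a == b:
        return True
    if (isinstance(a, tuple) and isinstance(b, tuple)
            and a[0] == b[0] == "pair"):
        # Pairs are covariant in both components here.
        return subtype(a[1], b[1]) and subtype(a[2], b[2])
    return False

subtype(("pair", "int", "int"), ("pair", "any", "int"))  # True
subtype("int", "str")                                    # False
```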
_______________________________________________
bitc-dev mailing list
bitc-dev@coyotos.org
http://www.coyotos.org/mailman/listinfo/bitc-dev