I'm modelling dependent types in Redex, and I've got a rule that looks like
this:

 [
  (WellFormed (EnvExt x gU Gamma))
  (Check (EnvExt x gU Gamma) gu gV)
  ------------------------------------------------------------------- 
"CheckLamPi"
  (Check Gamma  (CanonicalLam x gu)  (CanonicalPi x gU gV))]

CanonicalLam and CanonicalPi each bind x inside gu and gV respectively.

The problem is, elsewhere in my model I have a type normalization judgment, 
and when it normalizes the CanonicalPi type,
it binds a fresh variable, not the original variable that was given.
This causes the "CheckLamPi" rule never to fire during typechecking, since
the lambda and the normalized Pi type no longer bind literally the same variable,
so the rule's pattern fails to match.

What I really want is to treat terms as alpha equivalence classes, i.e. to 
write something like the above, and have it implicitly be implemented as 
something like this:

[
  (fresh z)
  (WellFormed (EnvExt z gU Gamma))
  (Check (EnvExt z gU Gamma) (subst z x gu) (subst z y gV))
  ------------------------------------------------------------------- 
"CheckLamPi"
  (Check Gamma  (CanonicalLam x gu)  (CanonicalPi y gU gV))]


That is, I don't want the pattern match to require that the term and the type
bind literally the same variable; instead, I want both to be alpha-renamed so
that they bind a common fresh name.
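(For context, here is a sketch of how I understand Redex's binding-form
declarations would express this, assuming a Redex version with
`#:binding-forms` support; the nonterminal names and the arities of
`CanonicalLam`/`CanonicalPi` are guesses based on my rule above:)

```racket
#lang racket
(require redex)

;; A minimal sketch: declaring CanonicalLam and CanonicalPi as binding
;; forms should make Redex match and compare these terms up to
;; alpha-equivalence, rather than on the literal bound-variable name.
(define-language dep
  (g ::= x (CanonicalLam x g) (CanonicalPi x g g))
  (x ::= variable-not-otherwise-mentioned)
  #:binding-forms
  (CanonicalLam x g #:refers-to x)          ; x binds in the body
  (CanonicalPi x g_U g_V #:refers-to x))    ; x binds in the codomain only
```

If I've understood the docs, with such a declaration
`(alpha-equivalent? dep t1 t2)` would identify terms differing only in
bound names, but I'm not sure whether judgment-form pattern matching
picks this up automatically.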

Do I have to write a rule that explicitly does this substitution? Or is 
there a way to tell Redex that I mean the second form when I write the 
first?

Thanks!
