t-vi commented on a change in pull request #8709:
URL: https://github.com/apache/tvm/pull/8709#discussion_r697425658



##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -148,29 +146,13 @@ def infer_type(self, node, mod=None):
 
         if node in self.types:
             return self.types[node]
-        if isinstance(node, tvm.relay.Var):
-            return node.type_annotation
-
-        tf = _TypeFinder(types=self.types)
-        new_node = tf.visit(node)
-        fn = _function.Function(list(tf.vars.values()), new_node)
-        new_mod = IRModule({"main": fn})
-        if mod is not None:
-            new_mod.update(mod)
-        new_mod = transform.RemoveUnusedFunctions()(new_mod)
-        new_mod = transform.InferType()(new_mod)
-        entry = new_mod["main"]
-        ty = entry.body.checked_type
-        self.types[node] = ty
-        return self.types[node]

Review comment:
       I think generalizing the incremental inference would be great, but 
while it has the clearly better asymptotic complexity, it is a bit of a mess to 
combine with the existing type inference pass because of how that pass works.
   So if other frontends do not need incremental type inference as much as 
the PyTorch one does, standardizing on the PyTorch frontend's incremental 
approach would impose additional complexity and a performance hit on them.
   
   As such, it seems dubious whether "cleanup" alone is a good reason to impose 
this on other frontends (if they benefited from incremental inference, that 
would be different).
   
   Of course, the "grand solution" (with the refactoring mentioned in the #7008 
discussion) of enabling incremental type inference in the type inference pass 
itself, rather than in an ad-hoc way in the frontend, would be desirable, but 
it will be a lot of work.
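   To make the trade-off concrete, the caching pattern the removed code 
implemented can be sketched roughly as follows. This is an illustrative 
stand-in, not TVM code: `ToyNode`, `run_full_inference`, and 
`IncrementalInferencer` are hypothetical names, and the "full inference" here 
is a counter-instrumented stub for the expensive whole-module `InferType` pass.

   ```python
   # Sketch of per-node type caching: each infer_type call first checks a
   # node -> type cache and only falls back to a (simulated) whole-module
   # inference pass on a cache miss.  Purely illustrative, not TVM APIs.

   class ToyNode:
       """Stand-in for a Relay expression node."""
       def __init__(self, name, ty):
           self.name = name
           self.ty = ty  # the "true" type, discovered only by full inference

   full_inference_runs = 0

   def run_full_inference(node):
       """Stand-in for the expensive InferType pass over a whole module."""
       global full_inference_runs
       full_inference_runs += 1
       return node.ty

   class IncrementalInferencer:
       def __init__(self):
           self.types = {}  # node -> cached inferred type

       def infer_type(self, node):
           # Cache hit: the incremental fast path, no full pass needed.
           if node in self.types:
               return self.types[node]
           # Cache miss: run the expensive pass once, then cache the result.
           ty = run_full_inference(node)
           self.types[node] = ty
           return ty

   inf = IncrementalInferencer()
   a = ToyNode("a", "Tensor[(3, 4), float32]")
   print(inf.infer_type(a))   # first call triggers full inference
   print(inf.infer_type(a))   # second call is served from the cache
   print(full_inference_runs) # 1
   ```

   The point of the comment above is that this cache only pays off for 
frontends (like the PyTorch one) that query types node-by-node many times; a 
frontend that infers types once per module would only inherit the extra 
bookkeeping.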
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
