Lunderberg commented on code in PR #16785:
URL: https://github.com/apache/tvm/pull/16785#discussion_r1539397202
##########
python/tvm/relax/frontend/nn/core.py:
##########
@@ -591,7 +609,22 @@ def wrap_nested(expr: rx.Expr, name: str) -> Union[Tensor, Sequence[Tensor]]:
The computed result.
"""
if not isinstance(expr, rx.DataflowVar):
- expr = BlockBuilder.current().emit(expr, name)
+ block_builder = BlockBuilder.current()
+ if block_builder is None:
+ # Normalize to make sure we have valid StructInfo, but
+ # wait until we are actually building the function to
+ # flatten nested expressions.
+ #
+ # TODO(Lunderberg): Make this easier to call. Inferring
+ # struct info for a nested expression should be doable in
+ # a free function, without requiring an active
+ # BlockBuilder and an active FunctionFrame.
Review Comment:
Long-term, I think it would be nice to distinguish between local struct
inference and non-local struct inference. Local inference could be applied
when a relax object is constructed, which would avoid the current two-phase
initialization of relax objects. Since this step can only perform local
struct inference, and local inference would be applied by default at
construction, this entire conditional could be removed.
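As a toy sketch of the idea (not TVM's actual API; all names here are hypothetical stand-ins for relax expressions and `StructInfo`), local inference at construction time would let each expression be fully initialized in one step, using only its operands' info, with no active `BlockBuilder` required:

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass(frozen=True)
class TensorInfo:
    """Toy stand-in for a relax StructInfo: just a static shape."""

    shape: Tuple[int, ...]


@dataclass(frozen=True)
class Expr:
    """Toy expression whose struct info is filled in at construction."""

    op: str
    args: Tuple["Expr", ...] = ()
    info: Optional[TensorInfo] = None


def add(a: Expr, b: Expr) -> Expr:
    # Local inference: consult only the operands' info.  No
    # BlockBuilder or function frame is needed, so there is no
    # second initialization phase.
    if a.info is not None and b.info is not None and a.info.shape == b.info.shape:
        inferred: Optional[TensorInfo] = a.info
    else:
        # Not enough local information; defer to non-local inference.
        inferred = None
    return Expr("add", (a, b), inferred)


x = Expr("x", info=TensorInfo((2, 3)))
y = Expr("y", info=TensorInfo((2, 3)))
z = add(x, y)  # z.info is already TensorInfo((2, 3))
```

The point of the sketch is that `z` never exists in a half-initialized state: its struct info is either known locally or explicitly deferred.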
There are some kinks that would need to be worked out first. Some of the
struct inference for tensor operations currently throws errors more often
than I think it should. (e.g. `R.matmul` throws an exception if the
arguments are not `R.Tensor`. If the arguments are `R.Object`, the
exception is still thrown, even though `R.Tensor` is a subtype of
`R.Object`, so the arguments may still be tensors at runtime.) These
fallbacks would probably get more exercise with local inference, as there
may be less information available.
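A sketch of the subtype-aware fallback described above (again a toy, not relax's actual inference code; the class names are hypothetical): when an argument's known type is a supertype like `R.Object`, inference could widen its result instead of raising:

```python
class StructInfo:
    """Toy base class for struct info."""


class ObjectInfo(StructInfo):
    """Plays the role of R.Object: nothing more specific is known."""


class TensorInfo(StructInfo):
    """Plays the role of R.Tensor with a static 2-D shape."""

    def __init__(self, shape):
        self.shape = shape


def infer_matmul(a: StructInfo, b: StructInfo) -> StructInfo:
    if isinstance(a, TensorInfo) and isinstance(b, TensorInfo):
        # Full local information: check compatibility and compute
        # the result shape.
        (m, k1), (k2, n) = a.shape, b.shape
        if k1 != k2:
            raise ValueError("incompatible shapes for matmul")
        return TensorInfo((m, n))
    if isinstance(a, ObjectInfo) or isinstance(b, ObjectInfo):
        # The tensor type is a subtype of the object type, so an
        # ObjectInfo argument *may* still be a tensor at runtime.
        # Widen the result instead of raising.
        return ObjectInfo()
    raise TypeError("arguments can never be tensors")
```

With tensor arguments this behaves as today, but an `ObjectInfo` argument produces a widened `ObjectInfo` result rather than an exception, which is the fallback path that local inference would exercise more heavily.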
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]