[GitHub] [incubator-tvm] tqchen commented on pull request #5756: make dtype="float" mean float32 for relay.expr.const

2020-06-10 Thread GitBox
tqchen commented on pull request #5756: URL: https://github.com/apache/incubator-tvm/pull/5756#issuecomment-642241286 Thanks @t-vi. I do not disagree; that is why we felt we should perhaps go with float in the first place. How about also opening a discussion thread about this topic?

[GitHub] [incubator-tvm] tqchen commented on pull request #5756: make dtype="float" mean float32 for relay.expr.const

2020-06-10 Thread GitBox
tqchen commented on pull request #5756: URL: https://github.com/apache/incubator-tvm/pull/5756#issuecomment-642218720 How about we first fix the usage of "float" throughout the codebase? Then we can send a PR to warn about "float" as a dtype in the parser.
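A minimal sketch of the kind of parser-side warning being proposed; `normalize_dtype` is a hypothetical helper for illustration, not TVM's actual API:

```python
import warnings

def normalize_dtype(dtype):
    """Hypothetical helper: map the ambiguous "float" to "float32" with a warning."""
    if dtype == "float":
        warnings.warn(
            '"float" is ambiguous (numpy reads it as float64); '
            'interpreting it as "float32". Pass an explicit dtype instead.',
            UserWarning,
        )
        return "float32"
    return dtype

print(normalize_dtype("float"))    # warns, then returns "float32"
print(normalize_dtype("float32"))  # returns "float32" unchanged, no warning
```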

[GitHub] [incubator-tvm] tqchen commented on pull request #5756: make dtype="float" mean float32 for relay.expr.const

2020-06-10 Thread GitBox
tqchen commented on pull request #5756: URL: https://github.com/apache/incubator-tvm/pull/5756#issuecomment-642170343 Aha, that is definitely interesting. It is still worth having a discussion about numpy compatibility versus favoring common deep learning conventions. That could lead to the…
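The compatibility tension is concrete: numpy resolves the bare string "float" to float64, while most deep learning frameworks default to float32. A quick check using only numpy:

```python
import numpy as np

# numpy's interpretation of the bare string "float": double precision.
print(np.dtype("float"))        # float64
print(np.array(1.0).dtype)      # float64 (Python floats are doubles)

# The single-precision convention common in deep learning, which this PR targets.
print(np.dtype("float32"))      # float32
```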

[GitHub] [incubator-tvm] tqchen commented on pull request #5756: make dtype="float" mean float32 for relay.expr.const

2020-06-10 Thread GitBox
tqchen commented on pull request #5756: URL: https://github.com/apache/incubator-tvm/pull/5756#issuecomment-642115243 Previously we used this approach. Personally, I was in favor of defaulting to fp32. However, one reason for us to convert to the current way was to make sure that…
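Whichever default wins, passing an explicit dtype sidesteps the ambiguity entirely. A minimal sketch, assuming a standard TVM install (relay.const is an alias for relay.expr.const):

```python
import tvm
from tvm import relay

# Explicit dtype: unambiguous regardless of how "float" is interpreted.
c32 = relay.const(1.0, dtype="float32")
c64 = relay.const(1.0, dtype="float64")

# The Constant's backing NDArray carries the resolved dtype string.
print(c32.data.dtype)  # float32
print(c64.data.dtype)  # float64
```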