[
https://issues.apache.org/jira/browse/MXNET-376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16535149#comment-16535149
]
Anirudh Acharya edited comment on MXNET-376 at 7/6/18 5:28 PM:
---------------------------------------------------------------
Currently, Hardmax can be accomplished by composing existing MXNet operators. For example:
{code:python}
import mxnet as mx
import numpy as np

# Compute Hardmax with axis=1: flatten the trailing axes into one,
# one-hot encode each row's argmax via an identity-matrix lookup,
# then restore the original shape.
x = np.random.rand(2, 3, 4)
xn = mx.nd.array(x)
xn_r = mx.nd.reshape(xn, shape=(2, 12))
xn_e = mx.nd.eye(xn_r.shape[1], dtype=x.dtype)[mx.nd.argmax(xn_r, axis=1)]
hardmax_output = mx.nd.reshape(xn_e, shape=xn.shape)
print(hardmax_output)
{code}
But a direct Hardmax implementation would be more convenient and useful for users who want to build their networks natively in MXNet, as opposed to just importing models from ONNX.
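For comparison, a generalized version of the above workaround that handles arbitrary input shapes and axes could look something like the sketch below. This is only an illustration; {{hardmax}} is a hypothetical helper name, not an existing MXNet operator, and it follows the ONNX Hardmax rule of coercing the input into a 2-D matrix of shape [d_0 * ... * d_(axis-1), d_axis * ... * d_(n-1)] before taking the per-row argmax.
{code:python}
import mxnet as mx
import numpy as np

def hardmax(x, axis=1):
    # Hypothetical helper: coerce the input into a 2-D matrix around `axis`,
    # as the ONNX Hardmax spec describes.
    rows = int(np.prod(x.shape[:axis]))  # np.prod of an empty tuple is 1
    x2d = mx.nd.reshape(x, shape=(rows, -1))
    # One-hot encode each row's argmax: 1 at the max entry, 0 elsewhere.
    onehot = mx.nd.one_hot(mx.nd.argmax(x2d, axis=1),
                           depth=x2d.shape[1], dtype=x.dtype)
    # Restore the original shape.
    return mx.nd.reshape(onehot, shape=x.shape)

xn = mx.nd.array(np.random.rand(2, 3, 4))
print(hardmax(xn, axis=1))  # same result as the workaround above
{code}
Using {{mx.nd.one_hot}} instead of indexing into {{mx.nd.eye}} avoids materializing an identity matrix, but either composition shows why a single built-in operator would be simpler for users.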
> Hardmax
> -------
>
> Key: MXNET-376
> URL: https://issues.apache.org/jira/browse/MXNET-376
> Project: Apache MXNet
> Issue Type: Sub-task
> Reporter: Hao Jin
> Priority: Major
>