This is an automated email from the ASF dual-hosted git repository.
marcoabreu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new a4054cd [MXNET-607] Fix the broken links reported by the new BLC (#11465)
a4054cd is described below
commit a4054cd5b20ebc12409effa398b1a32329bb91bf
Author: kpmurali <[email protected]>
AuthorDate: Thu Jun 28 20:25:14 2018 -0700
[MXNET-607] Fix the broken links reported by the new BLC (#11465)
* Fixing the broken links for the moved directories in api/python and scala
imageclassifier and SSDClassifier
---
docs/tutorials/gluon/mnist.md | 12 ++++++------
.../infer/imageclassifier/ImageClassifierExample.scala | 6 +++---
.../infer/objectdetector/SSDClassifierExample.scala | 6 +++---
3 files changed, 12 insertions(+), 12 deletions(-)
diff --git a/docs/tutorials/gluon/mnist.md b/docs/tutorials/gluon/mnist.md
index 3a2a2cb..5b8a98a 100644
--- a/docs/tutorials/gluon/mnist.md
+++ b/docs/tutorials/gluon/mnist.md
@@ -77,7 +77,7 @@ In an MLP, the outputs of most FC layers are fed into an activation function, wh
The following code declares three fully connected layers with 128, 64 and 10 neurons each.
The last fully connected layer often has its hidden size equal to the number of output classes in the dataset. Furthermore, these FC layers uses ReLU activation for performing an element-wise ReLU transformation on the FC layer output.
-To do this, we will use [Sequential layer](http://mxnet.io/api/python/gluon.html#mxnet.gluon.nn.Sequential) type. This is simply a linear stack of neural network layers. `nn.Dense` layers are nothing but the fully connected layers we discussed above.
+To do this, we will use [Sequential layer](http://mxnet.io/api/python/gluon/gluon.html#mxnet.gluon.nn.Sequential) type. This is simply a linear stack of neural network layers. `nn.Dense` layers are nothing but the fully connected layers we discussed above.
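The stacked dense-plus-ReLU structure this hunk describes can be sketched in plain NumPy (an illustrative stand-in, not the Gluon API; the 784-dimensional input and the random placeholder weights are assumptions, since Gluon would initialize parameters for us):

```python
import numpy as np

# Sketch of the 128 -> 64 -> 10 MLP described above, without mxnet.
rng = np.random.default_rng(0)

def dense(x, w, b):
    """Fully connected layer: x @ W + b."""
    return x @ w + b

def relu(x):
    """Element-wise ReLU, applied after the first two FC layers."""
    return np.maximum(x, 0.0)

# Random placeholder weights; layer sizes follow the tutorial text.
sizes = [784, 128, 64, 10]
params = [(rng.standard_normal((m, n)) * 0.01, np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

def forward(x):
    for i, (w, b) in enumerate(params):
        x = dense(x, w, b)
        if i < len(params) - 1:  # no activation on the output layer
            x = relu(x)
    return x

batch = rng.standard_normal((100, 784))  # one mini-batch of flattened images
logits = forward(batch)
print(logits.shape)  # (100, 10): one score per output class
```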
```python
# define network
@@ -90,13 +90,13 @@ with net.name_scope():
#### Initialize parameters and optimizer
-The following source code initializes all parameters received from parameter dict using [Xavier](http://mxnet.io/api/python/optimization.html#mxnet.initializer.Xavier) initializer
+The following source code initializes all parameters received from parameter dict using [Xavier](http://mxnet.io/api/python/optimization/optimization.html#mxnet.initializer.Xavier) initializer
to train the MLP network we defined above.
For our training, we will make use of the stochastic gradient descent (SGD) optimizer. In particular, we'll be using mini-batch SGD. Standard SGD processes train data one example at a time. In practice, this is very slow and one can speed up the process by processing examples in small batches. In this case, our batch size will be 100, which is a reasonable choice. Another parameter we select here is the learning rate, which controls the step size the optimizer takes in search of a soluti [...]
-We will use [Trainer](http://mxnet.io/api/python/gluon.html#trainer) class to apply the
-[SGD optimizer](http://mxnet.io/api/python/optimization.html#mxnet.optimizer.SGD) on the
+We will use [Trainer](http://mxnet.io/api/python/gluon/gluon.html#trainer) class to apply the
+[SGD optimizer](http://mxnet.io/api/python/optimization/optimization.html#mxnet.optimizer.SGD) on the
initialized parameters.
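What the Trainer does on each step can be illustrated with a bare-bones update rule (a sketch only, assuming the gradients have already been computed, as Gluon's autograd would do; the numbers are made up):

```python
import numpy as np

# Minimal sketch of one mini-batch SGD step: p <- p - lr * g,
# where g is the gradient averaged over the mini-batch.
def sgd_step(params, grads, lr=0.1):
    """In-place parameter update with a fixed learning rate."""
    for p, g in zip(params, grads):
        p -= lr * g
    return params

w = [np.array([1.0, 2.0])]   # toy parameter vector
g = [np.array([0.5, -0.5])]  # toy (pre-averaged) gradient
sgd_step(w, g, lr=0.1)
print(w[0])  # [0.95 2.05]
```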
```python
@@ -112,7 +112,7 @@ Typically, one runs the training until convergence, which means that we have lea
We will take following steps for training:
-- Define [Accuracy evaluation metric](http://mxnet.io/api/python/metric.html#mxnet.metric.Accuracy) over training data.
+- Define [Accuracy evaluation metric](http://mxnet.io/api/python/metric/metric.html#mxnet.metric.Accuracy) over training data.
- Loop over inputs for every epoch.
- Forward input through network to get output.
- Compute loss with output and label inside record scope.
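The accuracy metric in the first step above can be illustrated with a toy NumPy version (a sketch of the idea, not `mx.metric.Accuracy` itself; the sample outputs and labels are made up):

```python
import numpy as np

# Accuracy over a mini-batch: predictions are the argmax over
# class scores, compared element-wise against the true labels.
def accuracy(outputs, labels):
    preds = np.argmax(outputs, axis=1)
    return float(np.mean(preds == labels))

outputs = np.array([[0.1, 0.9],   # predicts class 1
                    [0.8, 0.2],   # predicts class 0
                    [0.3, 0.7]])  # predicts class 1
labels = np.array([1, 0, 0])
print(accuracy(outputs, labels))  # 2 of 3 correct
```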
@@ -121,7 +121,7 @@ We will take following steps for training:
Loss function takes (output, label) pairs and computes a scalar loss for each sample in the mini-batch. The scalars measure how far each output is from the label.
There are many predefined loss functions in gluon.loss. Here we use
-[softmax_cross_entropy_loss](http://mxnet.io/api/python/gluon.html#mxnet.gluon.loss.softmax_cross_entropy_loss) for digit classification. We will compute loss and do backward propagation inside
+[softmax_cross_entropy_loss](http://mxnet.io/api/python/gluon/gluon.html#mxnet.gluon.loss.softmax_cross_entropy_loss) for digit classification. We will compute loss and do backward propagation inside
training scope which is defined by `autograd.record()`.
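The softmax cross entropy computation itself can be sketched as follows (an illustrative NumPy version; Gluon's fused implementation differs in details, and the sample logits are made up):

```python
import numpy as np

# Per-sample softmax cross entropy over a mini-batch of logits.
def softmax_cross_entropy(logits, labels):
    shifted = logits - logits.max(axis=1, keepdims=True)  # stabilize exp
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # Pick out -log p(correct class) for each sample.
    return -log_probs[np.arange(len(labels)), labels]

logits = np.array([[2.0, 0.5, 0.1],   # confident in class 0
                   [0.2, 0.2, 0.2]])  # uniform: loss is log(3)
labels = np.array([0, 2])
loss = softmax_cross_entropy(logits, labels)
print(loss.shape)  # one scalar loss per sample in the mini-batch
```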
```python
diff --git a/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/imageclassifier/ImageClassifierExample.scala b/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/imageclassifier/ImageClassifierExample.scala
index 8a57527..e886b90 100644
--- a/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/imageclassifier/ImageClassifierExample.scala
+++ b/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/imageclassifier/ImageClassifierExample.scala
@@ -31,9 +31,9 @@ import scala.collection.mutable.ListBuffer
/**
* <p>
* Example inference showing usage of the Infer package on a resnet-152 model.
- * @see <a href="https://github.com/apache/incubator-mxnet/tree/m\
- * aster/scala-package/examples/src/main/scala/org/apache/mxnetexamples/in\
- * fer/imageclassifier" target="_blank">Instructions to run this example</a>
+ * @see <pre><a href="https://github.com/apache/incubator-mxnet/tree/master/s
+ cala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/im
+ ageclassifier" target="_blank">Instructions to run this example</a></pre>
*/
object ImageClassifierExample {
diff --git a/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/objectdetector/SSDClassifierExample.scala b/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/objectdetector/SSDClassifierExample.scala
index b5222e6..c9707cb 100644
--- a/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/objectdetector/SSDClassifierExample.scala
+++ b/scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/objectdetector/SSDClassifierExample.scala
@@ -33,9 +33,9 @@ import scala.collection.mutable.ListBuffer
* <p>
* Example single shot detector (SSD) using the Infer package
* on a ssd_resnet50_512 model.
- * @see <a href="https://github.com/apache/incubator-mxnet/tree/master/sca\
- * la-package/examples/src/main/scala/org/apache/mxnetexamples/infer/object\
- * detector" target="_blank">Instructions to run this example</a>
+ * @see <pre><a href="https://github.com/apache/incubator-mxnet/tree/master/s
+ cala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/object
+ detector" target="_blank">Instructions to run this example</a></pre>
*/
class SSDClassifierExample {
@Option(name = "--model-path-prefix", usage = "the input model directory and prefix of the model")