This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch v1.5.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.5.x by this push:
     new 0a3413f  DMLC link removal (#15708)
0a3413f is described below

commit 0a3413fd09913faa273e9c7f36356f88a11fc2aa
Author: IvyBazan <45951687+ivyba...@users.noreply.github.com>
AuthorDate: Mon Aug 5 12:36:59 2019 -0700

    DMLC link removal (#15708)
    
    * replaced julia page and remaining dmlc.ml links
    
    * Update README.md
    
    * Update index.md
    
    * Update README.md
    
    * Update README.md
---
 docs/api/julia/index.md                       | 11 ++++++-----
 docs/faq/bucketing.md                         |  2 +-
 docs/install/osx_setup.md                     |  2 +-
 docs/install/ubuntu_setup.md                  |  2 +-
 docs/install/windows_setup.md                 |  2 +-
 docs/tutorials/scala/char_lstm.md             |  2 +-
 docs/tutorials/tensorrt/inference_with_trt.md |  2 +-
 example/neural-style/end_to_end/README.md     |  2 +-
 tools/coreml/README.md                        |  2 +-
 tools/coreml/test/test_mxnet_models.py        |  4 ++--
 10 files changed, 16 insertions(+), 15 deletions(-)

diff --git a/docs/api/julia/index.md b/docs/api/julia/index.md
index 8aa884e..20911b4 100644
--- a/docs/api/julia/index.md
+++ b/docs/api/julia/index.md
@@ -17,7 +17,7 @@
 
 # MXNet - Julia API
 
-See the [MXNet Julia Reference Manual](https://media.readthedocs.org/pdf/mxnet-test/latest/mxnet-test.pdf).
+See the [MXNet Julia Site](site/index.html) for examples and API reference docs.
 
 MXNet supports the Julia programming language. The MXNet Julia package brings flexible and efficient GPU
 computing and the state-of-art deep learning to Julia.
@@ -26,8 +26,9 @@ computing and the state-of-art deep learning to Julia.
 - It also enables you to construct and customize the state-of-art deep learning models in Julia,
   and apply them to tasks such as image classification and data science challenges.
 
+## Installation
+* [Ubuntu installation guide](../../install/ubuntu_setup.html)
+* Mac / Windows guides are not available (contributions welcome!)
 
-&nbsp;
-
-## Julia API Reference
-Julia documents are available at 
[http://dmlc.ml/MXNet.jl/latest/](http://dmlc.ml/MXNet.jl/latest/).
+## Docs
+To build your own copy of the [MXNet Julia Site](site/index.html), run `make 
-C julia/docs` from the MXNet source root directory. You can also generate it 
with Docker by using `dev_menu.py` from the root directory and choosing to 
build the entire website. The Julia site will be located in `api/julia/site/`. 
diff --git a/docs/faq/bucketing.md b/docs/faq/bucketing.md
index b5fb987..e73a898 100644
--- a/docs/faq/bucketing.md
+++ b/docs/faq/bucketing.md
@@ -50,7 +50,7 @@ This approach works with variable length sequences. For more complicated models
 
 ## Variable-length Sequence Training for Sherlock Holmes
 
-We use the [Sherlock Holmes language model example](https://github.com/dmlc/mxnet/tree/master/example/rnn) for this example. If you are not familiar with this example, see [this tutorial (in Julia)](http://dmlc.ml/mxnet/2015/11/15/char-lstm-in-julia.html) first.
+We use the [Sherlock Holmes language model example](https://github.com/dmlc/mxnet/tree/master/example/rnn) for this example. If you are not familiar with this example, see [this tutorial (in Julia)](https://mxnet.incubator.apache.org/versions/master/api/julia/site/tutorial/char-lstm/) first.
 
 In this example, we use a simple architecture
 consisting of a word-embedding layer
diff --git a/docs/install/osx_setup.md b/docs/install/osx_setup.md
index 6d38c46..f32780f 100644
--- a/docs/install/osx_setup.md
+++ b/docs/install/osx_setup.md
@@ -217,7 +217,7 @@ You might want to add this command to your ```~/.bashrc``` file. If you do, you
        Pkg.add("MXNet")
 ```
 
-For more details about installing and using MXNet with Julia, see the [MXNet Julia documentation](http://dmlc.ml/MXNet.jl/latest/user-guide/install/).
+For more details about installing and using MXNet with Julia, see the [MXNet Julia documentation](https://mxnet.incubator.apache.org/versions/master/api/julia/site/user-guide/install/).
 
 
 ## Install the MXNet Package for Scala
diff --git a/docs/install/ubuntu_setup.md b/docs/install/ubuntu_setup.md
index ef700b4..9540e76 100644
--- a/docs/install/ubuntu_setup.md
+++ b/docs/install/ubuntu_setup.md
@@ -328,7 +328,7 @@ You might want to add this command to your ```~/.bashrc``` file. If you do, you
     Pkg.add("MXNet")
 ```
 
-For more details about installing and using MXNet with Julia, see the [MXNet Julia documentation](http://dmlc.ml/MXNet.jl/latest/user-guide/install/).
+For more details about installing and using MXNet with Julia, see the [MXNet Julia documentation](https://mxnet.incubator.apache.org/versions/master/api/julia/site/user-guide/install/).
 <hr>
 
 
diff --git a/docs/install/windows_setup.md b/docs/install/windows_setup.md
index f256148..56128ac 100644
--- a/docs/install/windows_setup.md
+++ b/docs/install/windows_setup.md
@@ -494,7 +494,7 @@ You might want to add this command to your ```~/.bashrc``` file. If you do, you
        Pkg.add("MXNet")
 ```
 
-For more details about installing and using MXNet with Julia, see the [MXNet Julia documentation](http://dmlc.ml/MXNet.jl/latest/user-guide/install/).
+For more details about installing and using MXNet with Julia, see the [MXNet Julia documentation](https://mxnet.incubator.apache.org/versions/master/api/julia/site/user-guide/install/).
 
 
 ## Installing the MXNet Package for Scala
diff --git a/docs/tutorials/scala/char_lstm.md b/docs/tutorials/scala/char_lstm.md
index aca08dc..4078196 100644
--- a/docs/tutorials/scala/char_lstm.md
+++ b/docs/tutorials/scala/char_lstm.md
@@ -21,7 +21,7 @@ This tutorial shows how to train a character-level language model with a multila
 
 There are many documents that explain LSTM concepts. If you aren't familiar with LSTM, refer to the following before you proceed:
 - Christopher Olah's [Understanding LSTM blog post](http://colah.github.io/posts/2015-08-Understanding-LSTMs/)
-- [Training a LSTM char-rnn in Julia to Generate Random Sentences](http://dmlc.ml/mxnet/2015/11/15/char-lstm-in-julia.html)
+- [Training a LSTM char-rnn in Julia to Generate Random Sentences](https://mxnet.incubator.apache.org/versions/master/api/julia/site/tutorial/char-lstm/)
 - [Bucketing in MXNet in Python](https://github.com/dmlc/mxnet-notebooks/blob/master/python/tutorials/char_lstm.ipynb)
 - [Bucketing in MXNet](http://mxnet.io/faq/bucketing.html)
 
diff --git a/docs/tutorials/tensorrt/inference_with_trt.md b/docs/tutorials/tensorrt/inference_with_trt.md
index 409c96e..02bfd84 100644
--- a/docs/tutorials/tensorrt/inference_with_trt.md
+++ b/docs/tutorials/tensorrt/inference_with_trt.md
@@ -118,7 +118,7 @@ for i in range(0, 10000):
 end = time.time()
 print(time.process_time() - start)
 ```
-We run timing with a warmup once more, and on the same machine, run in **18.99s**. A 1.8x speed improvement!  Speed improvements when using libraries like TensorRT can come from a variety of optimizations, but in this case our speedups are coming from a technique known as [operator fusion](http://dmlc.ml/2016/11/21/fusion-and-runtime-compilation-for-nnvm-and-tinyflow.html).
+We run timing with a warmup once more, and on the same machine, run in **18.99s**. A 1.8x speed improvement!  Speed improvements when using libraries like TensorRT can come from a variety of optimizations, but in this case our speedups are coming from a technique known as [operator fusion](#).
 
 ## Operators and Subgraph Fusion
 
diff --git a/example/neural-style/end_to_end/README.md b/example/neural-style/end_to_end/README.md
index 209d98d..68d632d 100644
--- a/example/neural-style/end_to_end/README.md
+++ b/example/neural-style/end_to_end/README.md
@@ -17,7 +17,7 @@
 
 # End to End Neural Art
 
-Please refer to this [blog](http://dmlc.ml/mxnet/2016/06/20/end-to-end-neural-style.html) for details of how it is implemented.
+Please refer to this [blog](https://thomasdelteil.github.io/NeuralStyleTransfer_MXNet/) for details of how it is implemented.
 
 ## How to use
 
diff --git a/tools/coreml/README.md b/tools/coreml/README.md
index 31982ba..00fc120 100644
--- a/tools/coreml/README.md
+++ b/tools/coreml/README.md
@@ -100,7 +100,7 @@ Any MXNet model that uses the above operators can be converted easily. For insta
 mxnet_coreml_converter.py --model-prefix='Inception-BN' --epoch=126 --input-shape='{"data":"3,224,224"}' --mode=classifier --pre-processing-arguments='{"image_input_names":"data"}' --class-labels synset.txt --output-file="InceptionBN.mlmodel"
 ```
 
-2. [NiN](http://data.dmlc.ml/models/imagenet/nin/)
+2. [NiN](#)
 
 ```bash
 mxnet_coreml_converter.py --model-prefix='nin' --epoch=0 --input-shape='{"data":"3,224,224"}' --mode=classifier --pre-processing-arguments='{"image_input_names":"data"}' --class-labels synset.txt --output-file="nin.mlmodel"
diff --git a/tools/coreml/test/test_mxnet_models.py b/tools/coreml/test/test_mxnet_models.py
index 8dd319a..d3080d6 100644
--- a/tools/coreml/test/test_mxnet_models.py
+++ b/tools/coreml/test/test_mxnet_models.py
@@ -151,8 +151,8 @@ class ModelsTest(unittest.TestCase):
 
     def test_pred_nin(self):
         self._test_model(model_name='nin', epoch_num=0,
-                         files=["http://data.dmlc.ml/models/imagenet/nin/nin-symbol.json",
-                                "http://data.dmlc.ml/models/imagenet/nin/nin-0000.params"])
+                         files=["",
+                                ""])
 
     @unittest.skip("You need to download and unzip file: "
                    "http://data.mxnet.io/models/imagenet/inception-v3.tar.gz in order to run this test.")

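Since the point of this commit is scrubbing stale dmlc.ml links, a reviewer can sanity-check a working tree for leftovers with a quick recursive grep. The snippet below is only an illustrative sketch against a throwaway temp directory (the demo file and path are hypothetical, not part of the commit):

```shell
# Demo: create a sample doc containing a stale dmlc.ml link, then scan for it.
demo_dir=$(mktemp -d)
printf 'See [docs](http://dmlc.ml/MXNet.jl/latest/).\n' > "$demo_dir/index.md"

# grep -r walks the tree; -l prints only the files that still reference dmlc.ml.
grep -rl "dmlc\.ml" "$demo_dir"
```

Against a real checkout one would run the same grep from the source root over `docs/`, `tools/`, and `example/` and expect no matches once this patch is applied.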