aaronmarkham commented on a change in pull request #13657: update with release notes for 1.4.0 release
URL: https://github.com/apache/incubator-mxnet/pull/13657#discussion_r242247551
 
 

 ##########
 File path: NEWS.md
 ##########
 @@ -1,6 +1,565 @@
 MXNet Change Log
 ================
 
+## 1.4.0
+### New Features
+#### Java Inference API
+
+Model inference is run and managed by software engineers in a production ecosystem built with tools and frameworks that use Java/Scala as a primary language. Inference on a trained model has two different use cases:
+
+  1. Real-time or Online Inference - tasks that require immediate feedback, such as fraud detection.
+  2. Batch or Offline Inference - tasks that don't require immediate feedback; these are use cases where you have massive amounts of data and want to run inference or pre-compute inference results.
+
+Batch Inference is performed on big data platforms such as Spark using Scala or Java, while Real-time Inference is typically performed and deployed on popular web frameworks such as Tomcat, Netty, and Jetty, which use Java. With this project, we want to build a new set of APIs that are Java friendly, compatible with Java 7+, easy to use for inference, and that lower the barrier to consuming MXNet for production use cases. More details can be found in the [Java Inference API document](https://cwiki.apache.org/confluence/display/MXNET/MXNet+Java+Inference+API).
+
+#### Julia API 
+
+MXNet.jl is the Julia package of Apache MXNet. MXNet.jl brings flexible and efficient GPU computing and state-of-the-art deep learning to Julia. Some highlighted features include:
+
+  * Efficient tensor/matrix computation across multiple devices, including multiple CPUs, GPUs, and distributed server nodes.
+  * Flexible symbolic manipulation to compose and construct state-of-the-art deep learning models.
+
+#### Control Flow Operators
+
+Today we observe more and more dynamic neural network models, especially in the fields of natural language processing and graph analysis. The dynamics in these models come from multiple sources, including:
+
+  * Models are expressed with control flow, such as conditions and loops;
+  * NDArrays in a model may have dynamic shapes, meaning the NDArrays of a model, or some of them, may have different shapes for different batches;
+  * Models may want to use more dynamic data structures, such as lists or dictionaries.
+
+It's natural to express dynamic models in frameworks with an imperative programming interface (e.g., Gluon, PyTorch, TensorFlow Eager). With this interface, users can simply use Python control flow, NDArrays with any shape at any moment, or Python lists and dictionaries to store data as they wish. The problem with this approach is that it depends heavily on the front-end programming language (mainly Python). A model implemented in one language can only run in the same language. A common use case is that machine learning scientists want to develop their models in Python, but the engineers who deploy those models usually have to use a different language (e.g., Java and C). Gluon tries to close the gap between model development and deployment: machine learning scientists design and implement their models in Python with the imperative interface, and Gluon turns the implementation into a symbolic one by simply invoking hybridize() for model exporting.
+
+The goal of this project is to enhance Gluon to turn a dynamic neural network into a static computation graph (where the dynamic control flows are expressed by control flow operators) with Gluon hybridization, and to export it for deployment. More information can be found at [Optimize dynamic neural network models with control flow operators](https://cwiki.apache.org/confluence/display/MXNET/Optimize+dynamic+neural+network+models+with+control+flow+operators).
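+
+As an illustration, here is a minimal Gluon sketch (assuming the contrib control flow operators such as `foreach`; the block and variable names are hypothetical) of a loop that survives hybridization:
+
+```python
+import mxnet as mx
+from mxnet import gluon, nd
+
+class CumulativeSum(gluon.HybridBlock):
+    """Computes running sums of a sequence with the `foreach` control flow operator."""
+    def hybrid_forward(self, F, data):
+        def step(x, states):
+            total = states[0] + x
+            return total, [total]
+        # F.contrib.foreach keeps the loop inside the computation graph,
+        # so the block remains exportable after hybridize().
+        outputs, _ = F.contrib.foreach(step, data, [F.zeros((1,))])
+        return outputs
+
+net = CumulativeSum()
+net.hybridize()  # the loop is captured as a control flow operator
+print(net(nd.arange(5).reshape((5, 1))))  # running sums: 0, 1, 3, 6, 10
+```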
+
+#### SVRG Optimization
+
+SVRG stands for Stochastic Variance Reduced Gradient, which was first introduced in the 2013 paper [Accelerating Stochastic Gradient Descent using Predictive Variance Reduction](https://papers.nips.cc/paper/4937-accelerating-stochastic-gradient-descent-using-predictive-variance-reduction.pdf). It is an optimization technique that complements SGD. SGD is known for large-scale optimization, but it suffers from slow asymptotic convergence due to its inherent variance: SGD approximates the full gradient using a small batch of samples, which introduces variance, so in order to converge it often needs to start with a smaller learning rate. SVRG remedies the problem by keeping a version of the estimated weights that is close to the optimal parameters and maintaining an average of the full gradient over a full pass of the data. The average of the full gradients of all data is calculated with respect to the parameters from the last m-th epoch. SVRG has provable guarantees for strongly convex smooth functions; a more detailed proof can be found in section 3 of the paper. SVRG uses a different update rule: gradients w.r.t. the current parameters, minus gradients w.r.t. the parameters from the last m-th epoch, plus the average of gradients over all data. Key characteristics of SVRG:
+
+  * Explicit variance reduction 
+  * Ability to use a relatively large learning rate compared to SGD, which leads to faster convergence.
+
+More details can be found at [SVRG Optimization in MXNet Python Module](https://cwiki.apache.org/confluence/display/MXNET/Unified+integration+with+external+backend+libraries).
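+
+To make the update rule concrete, here is a minimal NumPy sketch of SVRG on a least-squares objective (illustrative only, not the MXNet implementation; the objective and all names here are assumptions):
+
+```python
+import numpy as np
+
+def grad_i(w, x, y):
+    # Per-sample gradient of the squared error 0.5 * (x.w - y)^2
+    return (x @ w - y) * x
+
+def svrg(X, y, lr=0.05, epochs=20, seed=0):
+    rng = np.random.default_rng(seed)
+    n, d = X.shape
+    w = np.zeros(d)
+    for _ in range(epochs):
+        w_snap = w.copy()                # parameters kept from the snapshot epoch
+        mu = X.T @ (X @ w_snap - y) / n  # average of full gradients at the snapshot
+        for _ in range(n):               # one inner pass over the data
+            i = rng.integers(n)
+            # SVRG update: gradient w.r.t. current parameters, minus gradient
+            # w.r.t. the snapshot parameters, plus the average full gradient
+            w = w - lr * (grad_i(w, X[i], y[i]) - grad_i(w_snap, X[i], y[i]) + mu)
+    return w
+```
+
+In expectation the update direction equals the full gradient, while its variance shrinks as both the current and snapshot parameters approach the optimum; this is what permits the larger learning rate mentioned above.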
+
+#### Subgraph API
+
+MXNet can integrate with many different kinds of backend libraries, including TVM, MKLDNN, TensorRT, Intel nGraph and more. These backend in general support a limited number of operators, and thus running computation in a model usually involves in interaction between backend-supported operators and MXNet operators. These backend libraries share some common requirements:
 
 Review comment:
   ```suggestion
   MXNet can integrate with many different kinds of backend libraries, including TVM, MKLDNN, TensorRT, Intel nGraph and more. In general, these backends support a limited number of operators, so running computation in a model usually involves an interaction between backend-supported operators and MXNet operators. These backend libraries share some common requirements:
   ```

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
