srochel commented on a change in pull request #13657: update with release notes for 1.4.0 release
URL: https://github.com/apache/incubator-mxnet/pull/13657#discussion_r242748000
########## File path: NEWS.md ##########
@@ -1,6 +1,565 @@

MXNet Change Log
================

## 1.4.0
### New Features
#### Java Inference API

Model inference is run and managed by software engineers in production ecosystems built with tools and frameworks that use Java/Scala as a primary language. Inference on a trained model has two different use-cases:

 1. Real-time or online inference - tasks that require immediate feedback, such as fraud detection
 2. Batch or offline inference - tasks that don't require immediate feedback; these are use-cases where you have massive amounts of data and want to run inference or pre-compute inference results

Batch inference is performed on big-data platforms such as Spark using Scala or Java, while real-time inference is typically deployed on popular web frameworks such as Tomcat, Netty, and Jetty, which use Java. With this project, we want to build a new set of APIs that are Java friendly, compatible with Java 7+, easy to use for inference, and lower the entry barrier of consuming MXNet for production use-cases. More details can be found at the [Java Inference API document](https://cwiki.apache.org/confluence/display/MXNET/MXNet+Java+Inference+API).

#### Julia API

MXNet.jl is the Julia package of Apache MXNet. MXNet.jl brings flexible and efficient GPU computing and state-of-the-art deep learning to Julia. Highlights include:

 * Efficient tensor/matrix computation across multiple devices, including multiple CPUs, GPUs and distributed server nodes.
 * Flexible symbolic manipulation to compose and construct state-of-the-art deep learning models.

#### Control Flow Operators

Today we observe more and more dynamic neural network models, especially in the fields of natural language processing and graph analysis. The dynamics in these models come from multiple sources, including:

 * Models are expressed with control flow, such as conditions and loops;
 * NDArrays in a model may have dynamic shapes, meaning that some or all NDArrays in a model may have different shapes for different batches;
 * Models may want to use more dynamic data structures, such as lists or dictionaries.

It's natural to express dynamic models in frameworks with an imperative programming interface (e.g., Gluon, PyTorch, TensorFlow Eager). In such an interface, users can simply use Python control flow, NDArrays with any shape at any moment, or Python lists and dictionaries to store data as they want. The problem with this approach is that it highly depends on the front-end programming language (mainly Python). A model implemented in one language can only run in the same language. A common use-case is that machine learning scientists want to develop their models in Python, but engineers who deploy the models usually have to use a different language (e.g., Java or C). Gluon tries to close the gap between model development and deployment: machine learning scientists design and implement their models in Python with the imperative interface, and Gluon turns the implementations into symbolic implementations by simply invoking hybridize() for model exporting.

The goal of this project is to enhance Gluon to turn a dynamic neural network into a static computation graph (where the dynamic control flows are expressed by control flow operators) with Gluon hybridization, and to export it for deployment. More information can be found at [Optimize dynamic neural network models with control flow operators](https://cwiki.apache.org/confluence/display/MXNET/Optimize+dynamic+neural+network+models+with+control+flow+operators).
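To make the hybridization story concrete, here is a minimal, hedged sketch (an editorial illustration, not part of the submitted notes) of a Gluon `HybridBlock` built on the new `contrib.foreach` control flow operator; the block name, shapes, and toy data are assumptions chosen for demonstration only.

```python
import mxnet as mx
from mxnet.gluon import HybridBlock


class RunningSum(HybridBlock):
    """Scans along axis 0 and emits the running sum after every step."""

    def hybrid_forward(self, F, data, init):
        def step(x, states):
            # x: one slice of `data` along axis 0; states: [running_sum]
            total = states[0] + x
            return total, [total]

        # foreach is one of the control flow operators in this release; after
        # hybridize() it is recorded as a single graph node holding a subgraph.
        outputs, _ = F.contrib.foreach(step, data, [init])
        return outputs


block = RunningSum()
block.hybridize()                            # control flow survives export as operators
data = mx.nd.arange(6).reshape((3, 2))       # 3 steps, 2 features each
init = mx.nd.zeros((2,))                     # initial running sum
print(block(data, init))                     # running sums, shape (3, 2)
```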
#### SVRG Optimization

SVRG stands for Stochastic Variance Reduced Gradient, which was first introduced in the 2013 paper [Accelerating Stochastic Gradient Descent using Predictive Variance Reduction](https://papers.nips.cc/paper/4937-accelerating-stochastic-gradient-descent-using-predictive-variance-reduction.pdf). It is an optimization technique that complements SGD. SGD is known for large-scale optimization, but it suffers from slow asymptotic convergence due to its inherent variance: SGD approximates the full gradient using a small batch of samples, which introduces variance, so in order to converge SGD often needs to start with a smaller learning rate. SVRG remedies the problem by keeping a version of the estimated weights that is close to the optimal parameters and maintaining the average of the full gradient over a full pass of the data. The average of the full gradients over all data is calculated w.r.t. the parameters from the last m-th epoch. SVRG has provable guarantees for strongly convex smooth functions; a more detailed proof can be found in section 3 of the paper. SVRG uses a different update rule: gradients w.r.t. the current parameters, minus gradients w.r.t. the parameters from the last m-th epoch, plus the average of gradients over all data. Key characteristics of SVRG:

 * Explicit variance reduction
 * Ability to use a relatively large learning rate compared to SGD, which leads to faster convergence.

More details can be found at [SVRG Optimization in MXNet Python Module](https://cwiki.apache.org/confluence/display/MXNET/Unified+integration+with+external+backend+libraries)
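For illustration, a hedged sketch of driving the new Python `SVRGModule` (#12376) on a toy regression problem follows; the toy symbol, data, and the exact constructor and `fit` arguments (e.g. `update_freq`) are assumptions, so the module's docstrings remain the authoritative reference.

```python
import mxnet as mx
from mxnet.contrib.svrg_optimization.svrg_module import SVRGModule

# Toy linear-regression symbol (illustrative only).
data = mx.sym.Variable('data')
label = mx.sym.Variable('lin_reg_label')
fc = mx.sym.FullyConnected(data, num_hidden=1, name='fc')
out = mx.sym.LinearRegressionOutput(fc, label=label, name='lin_reg')

X = mx.nd.random.uniform(shape=(1000, 10))
y = mx.nd.random.uniform(shape=(1000, 1))
train_iter = mx.io.NDArrayIter(X, y, batch_size=32, label_name='lin_reg_label')

# update_freq controls how often (in epochs) the full-gradient snapshot that
# SVRG subtracts in its update rule is refreshed.
mod = SVRGModule(symbol=out, data_names=['data'], label_names=['lin_reg_label'],
                 update_freq=2)
mod.fit(train_iter, eval_metric='mse', optimizer='sgd',
        optimizer_params={'learning_rate': 0.01}, num_epoch=10)
```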
#### Subgraph API

MXNet can integrate with many different kinds of backend libraries, including TVM, MKLDNN, TensorRT, Intel nGraph and more. These backends in general support a limited number of operators, so running computation in a model usually involves interaction between backend-supported operators and MXNet operators. These backend libraries share some common requirements:

 * TVM, MKLDNN and nGraph use customized data formats. Interaction between these backends and MXNet requires data format conversion.
 * TVM, MKLDNN, TensorRT and nGraph fuse operators.

Integration with these backends should happen at the granularity of subgraphs instead of individual operators. To fuse operators, it's obvious that we need to divide a graph into subgraphs so that the operators in a subgraph can be fused into a single operator. To handle customized data formats, we should partition a computation graph into subgraphs as well, where each subgraph contains only TVM, MKLDNN or nGraph operators. In this way, MXNet converts data formats only when entering such a subgraph, and the operators inside a subgraph handle format conversion themselves if necessary. This makes the interaction of TVM and MKLDNN with MXNet much easier: neither the MXNet executor nor the MXNet operators need to deal with customized data formats. Even though invoking these libraries from MXNet requires similar steps, the partitioning rule and the subgraph execution of these backends can be different. As such, we define an interface for backends to customize graph partitioning and subgraph execution inside an operator. More details can be found at PR #12157 and [Subgraph API](https://cwiki.apache.org/confluence/display/MXNET/Unified+integration+with+external+backend+libraries).
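As a hedged illustration of how a subgraph backend is selected from user code, the sketch below opts in to the MKLDNN subgraph backend through the `MXNET_SUBGRAPH_BACKEND` environment variable; the toy symbol and the timing assumption noted in the comments are illustrative, and the snippet presumes an MXNet build with MKLDNN enabled.

```python
import os
# Must be set before the graph is partitioned; setting it before importing
# mxnet keeps the assumption safe. Requires an MXNet build with MKLDNN.
os.environ['MXNET_SUBGRAPH_BACKEND'] = 'MKLDNN'

import mxnet as mx

data = mx.sym.Variable('data')
conv = mx.sym.Convolution(data, kernel=(3, 3), num_filter=8, name='conv')
act = mx.sym.Activation(conv, act_type='relu', name='relu')

# With a subgraph backend active, eligible operators (for MKLDNN, patterns
# such as the convolution + relu pair above) are grouped into a fused
# subgraph node when the graph is bound, instead of running one MXNet
# operator at a time.
exe = act.simple_bind(mx.cpu(), data=(1, 3, 32, 32))
outputs = exe.forward(is_train=False,
                      data=mx.nd.random.uniform(shape=(1, 3, 32, 32)))
print(outputs[0].shape)
```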
#### MXNet nGraph integration

As the diversity of deep learning hardware accelerators increases, it is important to have an efficient abstraction layer so developers can avoid having to enable each accelerator or compute target separately. Intel nGraph enables that vision. The primary goal of this integration is to provide a seamless development and deployment experience for data scientists and machine learning engineers leveraging the Intel nGraph ecosystem with MXNet. Because the Subgraph API integrates seamlessly with the MXNet frontend API, users should be able to use or switch to the nGraph backend with any existing MXNet scripts, models and deployments that use the symbolic interface. For more details see [MXNet nGraph integration using subgraph backend interface](https://cwiki.apache.org/confluence/display/MXNET/MXNet+nGraph+integration+using+subgraph+backend+interface)

#### JVM Memory Management

The MXNet Scala and Java APIs use native memory to manage NDArray, Symbol, Executor and DataIterator objects through the MXNet C API. The C APIs provide interfaces to create, access and free these objects, and MXNet Scala has corresponding wrappers and APIs that hold pointer references to the native memory. Before this project, JVM users (Scala/Clojure/Java) of Apache MXNet had to manage MXNet objects manually using the dispose pattern. There are a few usability problems with this approach:

 * Users have to track MXNet objects manually and remember to call dispose. This is not idiomatic Java and not user-friendly; quoting a user, "this feels like I am writing C++ code which I stopped ages ago".
 * It leads to memory leaks if dispose is not called.
 * Many objects in MXNet-Scala are managed in native memory and need dispose called on them as well.
 * Code becomes bloated with dispose() calls.
 * Memory leaks are hard to debug.

The goals of the project are to provide MXNet JVM users with automated memory management that releases native memory when there are no references to the corresponding JVM objects, and to manage both GPU and CPU memory automatically without performance degradation. More details can be found here: [JVM Memory Management](https://cwiki.apache.org/confluence/display/MXNET/JVM+Memory+Management)

#### Topology-aware AllReduce

For distributed training, the ring reduce communication pattern used by NCCL and the parameter-server reduce currently used in MXNet are not optimal for small batch sizes on p3.16xlarge instances with 8 GPUs. The approach is based on the idea of using trees to perform the reduce and broadcast: following the paper by Wang, Li, Edo and Smola [1], we can use minimum spanning trees to build a binary-tree reduce communication pattern. Our strategy is to use:

 * a single tree (latency-optimal for small messages) to handle reduce on small messages, and
 * multiple trees (bandwidth-optimal for large messages) to handle large messages.

More details can be found here: [Topology-aware AllReduce](https://cwiki.apache.org/confluence/display/MXNET/Single+machine+All+Reduce+Topology-aware+Communication)
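A hedged sketch of opting in to the tree-based reduce from a training script follows; the `MXNET_KVSTORE_USETREE` switch is taken from the MXNet environment-variable documentation, while the toy kvstore traffic below is purely illustrative.

```python
import os
os.environ['MXNET_KVSTORE_USETREE'] = '1'   # ask the kvstore to use tree reduction

import mxnet as mx

kv = mx.kv.create('device')                 # single-machine, multi-GPU kvstore
shape = (2, 3)
kv.init(0, mx.nd.ones(shape))

# In a real multi-GPU job each pushed gradient lives on a different device and
# the cross-device reduce follows the tree schedule rather than a ring.
kv.push(0, mx.nd.ones(shape))
out = mx.nd.zeros(shape)
kv.pull(0, out=out)
print(out)
```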
### New Operators

* Add trigonometric operators (#12424)
* [MXNET-807] Support integer label type in ctc_loss operator (#12468)
* [MXNET-876] make CachedOp a normal operator (#11641)
* Add index_copy() operator (#12810)
* getnnz operator for CSR matrix (#12908)
* [MXNET-1173] Debug operators - isfinite, isinf and isnan (#12967)
* sample_like operators (#13034)
* Add gauss err function operator (#13229)
* [MXNET-1030] Cosine Embedding Loss (#12750)
* Add bytearray support back to imdecode (#12855, #12868) (#12912)
* Add Psroipooling CPU implementation (#12738)

### Feature improvements
#### Operator
* [MXNET-912] Refactoring ctc loss operator (#12637)
* Refactor L2_normalization (#13059)
* Customized take forward for CPU (#12997)
* Allow stop of arange to be inferred from dims. (#12064)
* Make check_isfinite, check_scale optional in clip_global_norm (#12042)
* Add FListInputNames attribute to softmax_cross_entropy (#12701)
* [MXNET-867] Pooling1D with same padding (#12594)
* Add support for more req patterns for bilinear sampler backward (#12386)
* [MXNET-882] Support for N-d arrays added to diag op. (#12430)

#### Optimizer
* Adagrad optimizer with row-wise learning rate (#12365)
* Adding python SVRGModule for performing SVRG Optimization Logic (#12376)

#### Sparse

* Fall back when sparse arrays are passed to MKLDNN-enabled operators (#11664)
* Further bump up tolerance for sparse dot (#12527)
* Sparse support for logic ops (#12860)
* Sparse support for take(csr, axis=0) (#12889)

#### ONNX

* ONNX export - Clip operator (#12457)
* ONNX version update from 1.2.1 to 1.3 in CI (#12633)
* Use modern ONNX API to load model from file (#12777)
* [MXNET-892] ONNX export/import: DepthToSpace, SpaceToDepth operators (#12731)
* ONNX export: Fully connected operator w/o bias, ReduceSum, Square (#12646)
* ONNX export/import: Selu (#12785)
* ONNX export: Cleanup (#12878)
* Added operators: Selu, DepthToSpace, SpaceToDepth, HardSigmoid, Logical operators

#### MKLDNN

* MKLDNN Forward FullyConnected op cache (#11611)
* [MXNET-753] Fallback when using non-MKLDNN supported operators (#12019)
* MKLDNN Backward op cache (#11301)
* Implement MKLDNN convolution fusion and quantization. (#12530)
* Improve MKLDNN fallback. (#12663)
* Update MKL-DNN dependency (#12953)
* Update MKLML dependency (#13181)
* [MXNET-33] Enhance MKLDNN pooling to support full convention (#11047)

#### Inference
* [MXNET-910] Multithreading inference. (#12456)
* Tweaked the copy in c_predict_api.h (#12600)

#### Other
* Support for upper triangular matrices in linalg (#12904)
* [MXNET-918] Introduce Random module / Refactor code generation (#13038)
* [MXNET-779] Add DLPack Transformation API (#12047)
* Draw labels name (#9496)
* Change the way NDArrayIter handles the last batch (#12285)
* Revert "Change the way NDArrayIter handles the last batch" (#12537)
* Track epoch metric separately (#12182)
* Set correct update on kvstore flag in dist_device_sync mode (#12786)

### Frontend API updates

#### Gluon

* Update basic_layers.py (#13299)
* Gluon LSTM Projection and Clipping Support (#13056)
* Make Gluon download function to be atomic (#12572)
* [MXNET-1004] Poisson Negative Log-Likelihood loss (#12697)
* Add activation information for mxnet.gluon.nn._Conv (#12354)
* Gluon DataLoader: avoid recursionlimit error (#12622)

#### Symbol
* Addressed duplicate object reference issues (#13214)
* Throw exception if MXSymbolInferShape fails. (#12733)
* Infer dtype in SymbolBlock import from input symbol (#12412)

### Language API updates
#### Java
* [MXNET-1198] MXNet Java API (#13162)

#### R
* Refactor R Optimizers to fix memory leak (#11374)
* Add new Vignettes to the R package
  * Char-level Language modeling (#12670)
  * Multidimensional Time series forecasting (#12664)
* Fix broken Examples and tutorials
  * Tutorial on neural network introduction (#12117)
  * CGAN example (#12283)
  * Test classification with LSTMs (#12263)

#### Scala
* Explain the details for Scala Experimental (#12348)
* [MXNET-873] Bring Clojure Package Inline with New DataDesc and Layout in Scala Package (#12387)
* [MXNET-716] Adding Scala Inference Benchmarks (#12721)
* [MXNET-716][MIRROR #12723] Scala Benchmark Extension pack (#12758)
* NativeResource Management in Scala (#12647)
* Ignore generated Scala files. (#12928)
* Use ResourceScope in Model/Trainer/FeedForward.scala (#12882)
* [MXNET-1180] Scala Image API (#12995)
* ONNX export: Scalar, Reshape - Set appropriate tensor type (#13067)
* Port of Scala Image API to Clojure (#13107)
* Update log4j version of Scala package (#13131)
* Review require() usages to add meaningful messages. (#12570)

#### Clojure
* Introduction to Clojure-MXNet video link. (#12754)
* Improve the Clojure Package README to Make it Easier to Get Started (#12881)

#### Perl
* [MXNET-1026] [Perl] Sync with recent changes in Python's API (#12739)

#### Julia
* Import Julia binding

### Performance improvements
* Update mshadow for OMP acceleration when nvcc is not present (#12674)
* [MXNET-860] Avoid implicit double conversions (#12361)

### Bug fixes
* Fix a bug in where op with 1-D input (#12325)
* [MXNET-825] Fix CGAN R Example with MNIST dataset (#12283)
* [MXNET-535] Fix bugs in LR Schedulers and add warmup (#11234)
* Fix speech recognition example (#12291)
* Fix bug in 'device' type kvstore (#12350)
* Fix search result 404s (#12414)
* Fix help in imread (#12420)
* Fix render issue on < and > (#12482)
* [MXNET-853] Fix for smooth_l1 operator scalar default value (#12284)
* Fix subscribe links, remove disabled icons (#12474)
* Fix broken URLs (#12508)
* Fix/public internal header (#12374)
* Fix lazy record io when used with dataloader and multi_worker > 0 (#12554)
* Fix error in try/finally block for blc (#12561)
* Add cudnn_off parameter to SpatialTransformer Op and fix the inconsistency between CPU & GPU code (#12557)
* [MXNET-798] Fix the dtype cast from non float32 in Gradient computation (#12290)
* Fix CodeCovs proper commit detection (#12551)
* Add TensorRT tutorial to index and fix ToC (#12587)
* Fixed typo in c_predict_api.cc (#12601)
* Fix typo in profiler.h (#12599)
* Fixed NoSuchMethodError for Jenkins Job for MBCC (#12618)
* [MXNET-922] Fix memleak in profiler (#12499)
* [MXNET-969] Fix buffer overflow in RNNOp (#12603)
* Fixed param coercion of clojure executor/forward (#12627) (#12630)
* Fix version dropdown behavior (#12632)
* Fix reference to wrong function (#12644)
* Fix the location of the tutorial of control flow operators (#12638)
* Fix bug, issue 12613 (#12614)

Review comment:
   clarified

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

With regards,
Apache Git Services
