astonzhang commented on a change in pull request #13657: update with release notes for 1.4.0 release
URL: https://github.com/apache/incubator-mxnet/pull/13657#discussion_r243083834
 
 

 ##########
 File path: NEWS.md
 ##########
 @@ -1,6 +1,580 @@
-MXNet Change Log
+Apache MXNet (incubating) Change Log
 ================
 
+## 1.4.0
+
+- [New Features](#new-features)
+  * [Java Inference API](#java-inference-api)
+  * [Julia API](#julia-api)
+  * [Control Flow Operators (experimental)](#control-flow-operators--experimental-)
+  * [SVRG Optimization](#svrg-optimization)
+  * [Subgraph API (experimental)](#subgraph-api--experimental-)
+  * [JVM Memory Management](#jvm-memory-management)
+  * [Topology-aware AllReduce (experimental)](#topology-aware-allreduce--experimental-)
+  * [MKLDNN backend: Graph optimization and Quantization (experimental)](#mkldnn-backend--graph-optimization-and-quantization--experimental-)
+    + [Graph Optimization](#graph-optimization)
+    + [Quantization](#quantization)
+- [New Operators](#new-operators)
+- [Feature improvements](#feature-improvements)
+  * [Operator](#operator)
+  * [Optimizer](#optimizer)
+  * [Sparse](#sparse)
+  * [ONNX](#onnx)
+  * [MKLDNN](#mkldnn)
+  * [Inference](#inference)
+  * [Other](#other)
+- [Frontend API updates](#frontend-api-updates)
+  * [Gluon](#gluon)
+  * [Symbol](#symbol)
+- [Language API updates](#language-api-updates)
+  * [Java](#java)
+  * [R](#r)
+  * [Scala](#scala)
+  * [Clojure](#clojure)
+  * [Perl](#perl)
+  * [Julia](#julia)
+- [Performance benchmarks and improvements](#performance-benchmarks-and-improvements)
+- [Bug fixes](#bug-fixes)
+- [Licensing updates](#licensing-updates)
+- [Improvements](#improvements)
+  * [Tutorial](#tutorial)
+  * [Example](#example)
+  * [Documentation](#documentation)
+  * [Website](#website)
+  * [MXNet Distributions](#mxnet-distributions)
+  * [Installation](#installation)
+  * [Build and CI](#build-and-ci)
+  * [3rd party](#3rd-party)
+    + [TVM:](#tvm-)
+    + [CUDNN:](#cudnn-)
+    + [Horovod:](#horovod-)
+- [Deprecations](#deprecations)
+- [Other](#other-1)
+- [How to build MXNet](#how-to-build-mxnet)
+- [List of submodules used by Apache MXNet (Incubating) and when they were updated last](#list-of-submodules-used-by-apache-mxnet--incubating--and-when-they-were-updated-last)
+
+### New Features
+#### Java Inference API
+
+Model inference is often managed in a production ecosystem using primarily 
Java/Scala tools and frameworks. This release seeks to alleviate the need for 
software engineers to write custom MXNet wrappers to fit their production 
environment. 
+
+Inference on a trained model has a couple of common use cases:
+
+  1. Real-time or Online Inference - tasks that require immediate feedback, 
such as fraud detection
+  2. Batch or Offline Inference - tasks that don't require immediate feedback; these are use cases where you have massive amounts of data and want to run inference or pre-compute inference results.
+
+Real-time Inference is often performed and deployed on popular web frameworks such as Tomcat, Netty, Jetty, etc., all of which use Java. Batch Inference is often performed on big data platforms such as Spark using Scala or Java.
+
+With this project, we had the following goals:
+* Build a new set of APIs that are Java friendly, compatible with Java 7+, and easy to use for inference.
+* Lower the barrier to entry of consuming MXNet for production use cases.
+
+More details can be found at the [Java Inference API document](https://cwiki.apache.org/confluence/display/MXNET/MXNet+Java+Inference+API).
+
+#### Julia API 
+
+MXNet.jl is the Julia package of Apache MXNet. MXNet.jl brings flexible and efficient GPU computing and state-of-the-art deep learning to Julia. Feature highlights include:
+
+  * Efficient tensor/matrix computation across multiple devices, including 
multiple CPUs, GPUs and distributed server nodes.
+  * Flexible symbolic manipulation to compose and construct state-of-the-art deep learning models.
+
+#### Control Flow Operators (experimental)
+
+Today we observe more and more dynamic neural network models, especially in 
the fields of natural language processing and graph analysis. The dynamics in 
these models come from multiple sources, including:
+
+  * Models are expressed with control flow, such as conditions and loops;
+  * NDArrays in a model may have dynamic shapes, meaning the NDArrays of a 
model or some of the NDArrays have different shapes for different batches;
+  * Models may want to use more dynamic data structures, such as lists or 
dictionaries.
+
+It's natural to express dynamic models in frameworks with an imperative programming interface (e.g., Gluon, PyTorch, TensorFlow Eager). In this kind of interface, developers can use Python control flow, NDArrays with any shape at any moment, or Python lists and dictionaries to store data as they want. The problem with this approach is that it is highly dependent on the originating front-end programming language (mainly Python): a model implemented in one language can only run in the same language.
+
+A common use case is that machine learning scientists want to develop their models in Python, whereas engineers who deploy the models usually have to use a different "production" language (e.g., Java or C). Gluon tries to close the gap between model development and production deployment. Machine learning scientists design and implement their models in Python with the imperative interface, and then Gluon converts the implementations from imperative to symbolic by invoking `hybridize()` for model exporting.
+
+The goal of this project is to enhance Gluon to turn a dynamic neural network 
into a static computation graph. The dynamic control flows are expressed by 
control flow operators with Gluon hybridization, and these are exported for 
deployment. 
+
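+A minimal sketch of the intended workflow, assuming the contrib control flow operators (`cond`, `foreach`, `while_loop`) exposed under `F.contrib` in Gluon; the `SignScale` block and its names are hypothetical:
+
+```python
+import mxnet as mx
+from mxnet.gluon import HybridBlock
+
+class SignScale(HybridBlock):
+    """Doubles inputs whose sum is positive, negates the rest."""
+    def hybrid_forward(self, F, x):
+        # F.contrib.cond records the branch choice as an operator, so the
+        # control flow survives hybridization and model export
+        return F.contrib.cond(F.sum(x) > 0,
+                              lambda: x * 2,
+                              lambda: -x)
+
+net = SignScale()
+net.hybridize()                          # capture as a static graph
+print(net(mx.nd.array([1., 2., 3.])))    # takes the x * 2 branch
+net.export('sign_scale')                 # symbol + params for deployment
+```
+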
+More information can be found at [Optimize dynamic neural network models with control flow operators](https://cwiki.apache.org/confluence/display/MXNET/Optimize+dynamic+neural+network+models+with+control+flow+operators).
+
+#### SVRG Optimization
+
+SVRG stands for Stochastic Variance Reduced Gradient, which was first introduced in the 2013 paper [Accelerating Stochastic Gradient Descent using Predictive Variance Reduction](https://papers.nips.cc/paper/4937-accelerating-stochastic-gradient-descent-using-predictive-variance-reduction.pdf). It is an optimization technique that complements SGD.
+
+SGD is known for large-scale optimization, but it suffers from slow convergence asymptotically due to its inherent variance: SGD approximates the full gradient using a small batch of samples, which introduces variance. To converge faster, SGD often needs to start with a smaller learning rate.
+
+SVRG remedies the slow convergence problem by keeping a version of the estimated weights that is close to the optimal parameters and maintaining the average of the full gradient over a full pass of the data. The average of the full gradients over all data is calculated w.r.t. the parameters saved at the last m-th epoch. It has provable guarantees for strongly convex smooth functions; a detailed proof can be found in section 3 of the [paper](https://papers.nips.cc/paper/4937-accelerating-stochastic-gradient-descent-using-predictive-variance-reduction.pdf). SVRG uses a different update rule than SGD: gradients w.r.t. the current parameters, minus gradients w.r.t. the parameters from the last m-th epoch, plus the average of gradients over all data.
+
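+In symbols (a standard statement of SVRG, added here for clarity): with $\tilde{w}$ the snapshot of the parameters taken every $m$ epochs and $f_i$ the loss on the sampled mini-batch, the update is
+
+```latex
+w_{t+1} = w_t - \eta \left( \nabla f_i(w_t) - \nabla f_i(\tilde{w}) + \tilde{\mu} \right),
+\qquad \tilde{\mu} = \frac{1}{n} \sum_{j=1}^{n} \nabla f_j(\tilde{w})
+```
+
+so the correction term $-\nabla f_i(\tilde{w}) + \tilde{\mu}$ has zero mean and shrinks the variance of the stochastic gradient.
+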
+Key characteristics of SVRG:
+
+  * Explicit variance reduction
+  * Ability to use a relatively large learning rate compared to SGD, which leads to faster convergence.
+
+More details can be found at [SVRG Optimization in MXNet Python Module](https://cwiki.apache.org/confluence/display/MXNET/Unified+integration+with+external+backend+libraries)
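+
+A minimal sketch of driving the new `SVRGModule` (listed under Optimizer below); the toy regression setup here is hypothetical:
+
+```python
+import mxnet as mx
+from mxnet.contrib.svrg_optimization.svrg_module import SVRGModule
+
+# toy linear-regression data
+X = mx.nd.random.normal(shape=(1000, 10))
+y = mx.nd.random.normal(shape=(1000,))
+train_iter = mx.io.NDArrayIter(X, y, batch_size=32, label_name='lin_reg_label')
+
+data = mx.sym.Variable('data')
+label = mx.sym.Variable('lin_reg_label')
+fc = mx.sym.FullyConnected(data, num_hidden=1, name='fc')
+loss = mx.sym.LinearRegressionOutput(fc, label=label, name='lin_reg')
+
+# update_freq=2: refresh the full-gradient snapshot every 2 epochs
+mod = SVRGModule(symbol=loss, data_names=['data'],
+                 label_names=['lin_reg_label'], update_freq=2)
+mod.fit(train_iter, num_epoch=5, eval_metric='mse', optimizer='sgd',
+        optimizer_params=(('learning_rate', 0.025),))
+```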
+
+#### Subgraph API (experimental)
+
+MXNet can integrate with many different kinds of backend libraries, including 
TVM, MKLDNN, TensorRT, Intel nGraph and more. In general, these backends 
support a limited number of operators, so running computation in a model 
usually involves an interaction between backend-supported operators and MXNet 
operators. These backend libraries share some common requirements:
+
+  * TVM, MKLDNN and nGraph use customized data formats, so interaction between these backends and MXNet requires data format conversion.
+  * TVM, MKLDNN, TensorRT and nGraph fuse operators.
+
+Integration with these backends should happen at the granularity of subgraphs instead of the granularity of operators. To fuse operators, we need to divide a graph into subgraphs so that the operators in a subgraph can be fused into a single operator. To handle customized data formats, we should partition a computation graph into subgraphs as well, where each subgraph contains only TVM, MKLDNN or nGraph operators. In this way, MXNet converts data formats only when entering such a subgraph, and the operators inside a subgraph handle format conversion themselves if necessary. This makes interaction of TVM and MKLDNN with MXNet much easier: neither the MXNet executor nor the MXNet operators need to deal with customized data formats.
+
+Even though invoking these libraries from MXNet requires similar steps, the partitioning rule and the subgraph execution of these backends can be different. As such, we define an interface for backends to customize graph partitioning and subgraph execution inside an operator. More details can be found at PR 12157 and [Subgraph API](https://cwiki.apache.org/confluence/display/MXNET/Unified+integration+with+external+backend+libraries).
+
+#### JVM Memory Management
+
+The MXNet Scala and Java APIs use native memory to manage NDArrays, Symbols, Executors and DataIterators through MXNet's internal C APIs. The C APIs provide appropriate interfaces to create, access and free these objects. MXNet Scala has corresponding wrappers and APIs that hold pointer references to the native memory. Before this project, JVM users (e.g., Scala, Clojure, or Java) of MXNet had to manage MXNet objects manually using the dispose pattern. There are a few usability problems with this approach:
+
+* Users have to track the MXNet objects manually and remember to call `dispose`. This is not Java idiomatic and not user friendly. Quoting a user: "this feels like I am writing C++ code which I stopped ages ago".
+* It leads to memory leaks if `dispose` is not called.
+* Many objects in MXNet-Scala are managed in native memory, so `dispose` has to be used on them as well.
+* Code is bloated with `dispose()` calls.
+* Memory leaks are hard to debug.
+
+Goals of the project are:
+* Provide MXNet JVM users automated memory management that can release native memory when there are no references to JVM objects.
+* Provide automated memory management for both GPU and CPU memory without performance degradation.
+
+More details can be found here: [JVM Memory Management](https://cwiki.apache.org/confluence/display/MXNET/JVM+Memory+Management)
+
+#### Topology-aware AllReduce (experimental)
+
+For distributed training, the `Reduce` communication patterns used by NCCL and MXNet are not optimal for small batch sizes. The `Topology-aware AllReduce` approach is based on the idea of using trees to perform the `Reduce` and `Broadcast` operations. We can use the idea of minimum spanning trees to do a binary tree `Reduce` communication pattern to improve distributed training, following the paper by Wang, Li, Liberty and Smola [1]. Our strategy is to use:
+
+  * a single tree (latency-optimal for small messages) to handle `Reduce` on 
small messages
+  * multiple trees (bandwidth-optimal for large messages) to handle `Reduce` 
on large messages
+
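+A minimal sketch of opting in, assuming the feature is gated behind the `MXNET_KVSTORE_USETREE` environment variable as in the linked design:
+
+```python
+import os
+# must be set before MXNet initializes its kvstore machinery
+os.environ['MXNET_KVSTORE_USETREE'] = '1'   # opt in to tree-based reduce
+import mxnet as mx
+
+kv = mx.kv.create('device')                 # single-machine multi-GPU kvstore
+kv.init('w', mx.nd.zeros((2, 3)))
+kv.push('w', mx.nd.ones((2, 3)))
+out = mx.nd.empty((2, 3))
+kv.pull('w', out=out)
+print(out.asnumpy())
+```
+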
+More details can be found here: [Topology-aware AllReduce](https://cwiki.apache.org/confluence/display/MXNET/Single+machine+All+Reduce+Topology-aware+Communication)
+
+Note: This is an experimental feature with known problems - see [13341](https://github.com/apache/incubator-mxnet/issues/13341). Please help improve the robustness of the feature by contributing.
+
+#### MKLDNN backend: Graph optimization and Quantization (experimental)
+
+Two advanced features, graph optimization (operator fusion) and reduced-precision (INT8) computation, are introduced to the MKLDNN backend in this release ([#12530](https://github.com/apache/incubator-mxnet/pull/12530), [#13297](https://github.com/apache/incubator-mxnet/pull/13297), [#13260](https://github.com/apache/incubator-mxnet/pull/13260)). These features significantly boost inference performance on CPU (up to 4X) for a broad range of deep learning topologies. Currently, this feature is only available for inference on platforms with [supported Intel CPUs](https://github.com/intel/mkl-dnn#system-requirements).
+
+##### Graph Optimization
+
+The MKLDNN backend takes advantage of the MXNet subgraph feature to implement most of the possible operator fusions for inference, such as Convolution + ReLU, Batch Normalization folding, etc. When using the mxnet-mkl package, users can easily enable this feature by setting `export MXNET_SUBGRAPH_BACKEND=MKLDNN`.
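+
+A minimal sketch of fused CPU inference; the `resnet-50` checkpoint name is hypothetical:
+
+```python
+import os
+# must be set before the graph is bound; eligible subgraphs such as
+# Convolution + ReLU are then rewritten into fused MKLDNN operators
+os.environ['MXNET_SUBGRAPH_BACKEND'] = 'MKLDNN'
+import mxnet as mx
+
+sym, arg_params, aux_params = mx.model.load_checkpoint('resnet-50', 0)
+mod = mx.mod.Module(symbol=sym, context=mx.cpu(), label_names=None)
+mod.bind(for_training=False, data_shapes=[('data', (1, 3, 224, 224))])
+mod.set_params(arg_params, aux_params)
+mod.forward(mx.io.DataBatch([mx.nd.ones((1, 3, 224, 224))]), is_train=False)
+print(mod.get_outputs()[0].shape)
+```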
+
+##### Quantization
+
+Performance of reduced-precision (INT8) computation is also dramatically improved after the graph optimization feature is applied on CPU platforms. Various models are supported and can benefit from reduced-precision computation, including symbolic models, Gluon models and even custom models. Users can run most of the pre-trained models with only a few lines of commands and the new quantization script `imagenet_gen_qsym_mkldnn.py`. The observed accuracy loss is less than 0.5% for popular CNN models such as ResNet-50, Inception-BN and MobileNet.
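+
+A minimal sketch of the programmatic path, assuming the contrib `quantize_model` API and a hypothetical FP32 checkpoint named `resnet-50`:
+
+```python
+import mxnet as mx
+from mxnet.contrib.quantization import quantize_model
+
+sym, arg_params, aux_params = mx.model.load_checkpoint('resnet-50', 0)
+# calib_mode='none' inserts quantize/dequantize ops without calibration;
+# 'naive' or 'entropy' would additionally require calibration data
+qsym, qarg_params, aux_params = quantize_model(
+    sym=sym, arg_params=arg_params, aux_params=aux_params,
+    ctx=mx.cpu(), calib_mode='none', quantized_dtype='int8')
+mx.model.save_checkpoint('resnet-50-quantized', 0, qsym, qarg_params, aux_params)
+```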
+
+Please find detailed information and performance/accuracy numbers here: [MKLDNN README](https://github.com/apache/incubator-mxnet/blob/master/MKLDNN_README.md), [quantization README](https://github.com/apache/incubator-mxnet/tree/master/example/quantization#1) and [design proposal](https://cwiki.apache.org/confluence/display/MXNET/MXNet+Graph+Optimization+and+Quantization+based+on+subgraph+and+MKL-DNN).
+
+### New Operators 
+
+* Add trigonometric operators (#12424)
+* [MXNET-807] Support integer label type in ctc_loss operator (#12468)
+* [MXNET-876] make CachedOp a normal operator (#11641)
+* Add index_copy() operator (#12810)
+* Fix getnnz operator for CSR matrix (#12908) - issue #12872
+* [MXNET-1173] Debug operators - isfinite, isinf and isnan (#12967)
+* Add sample_like operators (#13034)
+* Add gauss err function operator (#13229)
+* [MXNET-1030] Enhanced Cosine Embedding Loss (#12750)
+* Add bytearray support back to imdecode (#12855, #12868) (#12912)
+* Add Psroipooling CPU implementation (#12738)
+
+### Feature improvements 
+#### Operator
+* [MXNET-912] Refactoring ctc loss operator (#12637)
+* Refactor L2_normalization (#13059)
+* Customized and faster `TakeOpForward` operator on CPU (#12997)
+* Allow stop of arange operator to be inferred from dims. (#12064)
+* Make check_isfinite, check_scale optional in clip_global_norm (#12042)
+* Add FListInputNames attribute to softmax_cross_entropy (#12701)
+* [MXNET-867] Pooling1D with same padding (#12594)
+* Add support for more req patterns for bilinear sampler backward (#12386)
+* [MXNET-882] Support for N-d arrays added to diag op. (#12430)
+
+#### Optimizer
+* Add a special version of Adagrad optimizer with row-wise learning rate 
(#12365)
+* Add a Python SVRGModule for performing SVRG Optimization Logic (#12376)
+
+#### Sparse
+
+* Fall back when sparse arrays are passed to MKLDNN-enabled operators (#11664)
+* Add Sparse support for logic operators (#12860)
+* Add Sparse support for take(csr, axis=0)  (#12889)
+
+#### ONNX
+
+* ONNX export - Clip operator (#12457)
+* ONNX version update from 1.2.1 to 1.3 in CI (#12633) 
+* Use modern ONNX API to load a model from file (#12777)
+* [MXNET-892] ONNX export/import: DepthToSpace, SpaceToDepth operators (#12731)
+* ONNX export: Fully connected operator w/o bias, ReduceSum, Square (#12646)
+* ONNX export/import: Selu (#12785)
+* ONNX export: Cleanup (#12878)
+* ONNX export: Scalar, Reshape - Set appropriate tensor type (#13067)
+* [MXNET-886] ONNX export: HardSigmoid, Less, Greater, Equal (#12812)
+
+#### MKLDNN
+
+* MKLDNN Forward FullyConnected  op cache (#11611)
+* [MXNET-753] Fallback when using non-MKLDNN supported operators (#12019)
+* MKLDNN Backward op cache (#11301)
+* Implement mkldnn convolution fusion and quantization. (#12530)
+* Improve mkldnn fallback. (#12663)
+* Update MKL-DNN dependency (#12953)
+* Update MKLML dependency (#13181)
+* [MXNET-33] Enhance mkldnn pooling to support full convention (#11047)
+
+#### Inference
+* [MXNET-910] Multithreading inference. (#12456)
+* Tweaked the copy in c_predict_api.h (#12600)
+
+#### Other
+* support for upper triangular matrices in linalg (#12904)
+* Introduce Random module / Refactor code generation (#13038)
+* [MXNET-779]Add DLPack Transformation API (#12047)
+* Draw label name next to corresponding bounding boxes when the mapping of id 
to names is specified (#9496)
+* Track epoch metric separately (#12182)
+* Set correct update on kvstore flag in dist_device_sync mode (#12786)
+
+### Frontend API updates
+
+#### Gluon
+
+* Update basic_layers.py (#13299)
+* Gluon LSTM Projection and Clipping Support (#13056)
+* Make Gluon download function to be atomic (#12572)
+* [MXNET-1004] Poisson NegativeLog Likelihood loss (#12697)
+* Add activation information for `mxnet.gluon.nn._Conv` (#12354)
+* Gluon DataLoader: avoid recursionlimit error (#12622)
+
+#### Symbol
+* Addressed duplicate object reference issues (#13214)
+* Throw exception if MXSymbolInferShape fails (#12733)
+* Infer dtype in SymbolBlock import from input symbol (#12412)
+
+### Language API updates
+#### Java
+* [MXNET-1198] MXNet Java API (#13162)
+
+#### R
+* Refactor R Optimizers to fix memory leak (#11374)
+* Add new Vignettes to the R package
+  * Char-level Language modeling (#12670)
+  * Multidimensional Time series forecasting (#12664)
+* Fix broken Examples and tutorials
+  * Tutorial on neural network introduction (#12117)
+  * CGAN example (#12283)
+  * Test classification with LSTMs (#12263)
+
+#### Scala
+* Explain the details for Scala Experimental (#12348)
+* [MXNET-716] Adding Scala Inference Benchmarks (#12721)
+* [MXNET-716][MIRROR #12723] Scala Benchmark Extension pack (#12758)
+* NativeResource Management in Scala (#12647)
+* Ignore generated Scala files (#12928)
+* Use ResourceScope in Model/Trainer/FeedForward.scala (#12882)
+* [MXNET-1180] Scala Image API (#12995) 
+* Update log4j version of Scala package (#13131)
+* Review require() usages to add meaningful messages (#12570)
+* Fix Scala readme (#13082)
+
+#### Clojure
+* Introduction to Clojure-MXNet video link (#12754)
+* Improve the Clojure Package README to Make it Easier to Get Started (#12881)
+* MXNET-873 - Bring Clojure Package Inline with New DataDesc and Layout in 
Scala Package (#12387)
+* Port of Scala Image API to Clojure (#13107) 
+
+#### Perl
+* [MXNET-1026] [Perl] Sync with recent changes in Python's API (#12739)
+
+#### Julia
+* Import Julia binding (#10149); usage instructions are available at https://github.com/apache/incubator-mxnet/tree/master/julia
+
+### Performance benchmarks and improvements
+* Update mshadow for omp acceleration when nvcc is not present  (#12674)
+* [MXNET-860] Avoid implicit double conversions (#12361)
+* Add more models to benchmark_score (#12780)
+* Add resnet50-v1 to benchmark_score (#12595)
+
+### Bug fixes
+* Fix for #10920 -  increase tolerance for sparse dot (#12527)
+* [MXNET-1234] Fix shape inference problems in Activation backward (#13409)
+* Fix a bug in `where` op with 1-D input (#12325)
+* [MXNET-825] Fix CGAN R Example with MNIST dataset (#12283)
+* [MXNET-535] Fix bugs in LR Schedulers and add warmup (#11234)
+* Fix speech recognition example (#12291)
+* Fix bug in 'device' type kvstore (#12350)
+* fix search result 404s (#12414) 
+* Fix help in imread (#12420)
+* Fix render issue on < and > (#12482)
+* [MXNET-853] Fix for smooth_l1 operator scalar default value (#12284)
+* Fix subscribe links, remove disabled icons (#12474)
+* Fix broken URLs (#12508)
+* Fix/public internal header (#12374)
+* Fix lazy record io when used with dataloader and multi_worker > 0 (#12554)
+* Fix error in try/finally block for blc (#12561)
+* Add cudnn_off parameter to SpatialTransformer Op and fix the inconsistency 
between CPU & GPU code (#12557)
+* [MXNET-798] Fix the dtype cast from non float32 in Gradient computation 
(#12290)
+* Fix CodeCovs proper commit detection (#12551)
+* Add TensorRT tutorial to index and fix ToC (#12587)
+* Fixed typo in c_predict_api.cc (#12601)
+* Fix typo in profiler.h (#12599)
+* Fixed NoSuchMethodError for Jenkins Job for MBCC (#12618)
+* [MXNET-922] Fix memleak in profiler (#12499)
+* [MXNET-969] Fix buffer overflow in RNNOp (#12603) 
* Fixed param coercion of clojure executor/forward (#12627) (#12630)
+* Fix version dropdown behavior (#12632)
+* Fix reference to wrong function (#12644)
+* Fix the location of the tutorial of control flow operators (#12638)
+* Fix issue 12613 (#12614)
+* [MXNET-780] Fix exception handling bug (#12051)
+* Fix bug in prelu, issue 12061 (#12660) 
+* [MXNET-833] [R] Char-level RNN tutorial fix (#12670)
+* Fix static / dynamic linking of gperftools and jemalloc (#12714)
+* Fix #12672, importing numpy scalars (zero-dimensional arrays) (#12678)
+* [MXNET-623] Fixing an integer overflow bug in large NDArray (#11742)
+* Fix benchmark on control flow operators (#12693)
+* Fix regression in MKLDNN caused by PR 12019 (#12740)
+* Fixed broken link for Baidu's WARP CTC (#12774)
+* Fix CNN visualization tutorial (#12719) 
+* [MXNET-979] Add fix_beta support in BatchNorm (#12625)
+* R fix metric shape (#12776)
+* Revert [MXNET-979] Add fix_beta support in BatchNorm (#12625) (#12789)
+* Fix mismatch shapes (#12793)
+* Fixed symbols naming in RNNCell, LSTMCell, GRUCell (#12794)
+* Fixed __setattr__ method of _MXClassPropertyMetaClass (#12811)
+* Fixed regex for matching platform type in Scala Benchmark scripts (#12826)
+* Fix broken links (#12856)
+* Fix Flaky Topk (#12798)
+* [MXNET-1033] Fix a bug in MultiboxTarget GPU implementation (#12840)
+* [MXNET-1107] Fix CPUPinned unexpected behaviour (#12031)
+* Fix __all__ in optimizer/optimizer.py (#12886)
+* Fix Batch input issue with Scala Benchmark (#12848)
+* fix type inference in index_copy. (#12890)
+* Fix the paths issue for downloading script (#12913)
+* Fix indpt[0] for take(csr) (#12927)
+* Fix the bug of assigning large integer to NDArray (#12921)
+* Fix Sphinx errors for tutorials and install ToCs (#12945)
+* Fix variable name in tutorial code snippet (#13052)
+* Fix example for mxnet.nd.contrib.cond and fix typo in src/engine (#12954)
+* Fix a typo in operator guide (#13115)
+* Fix variational autoencoder example (#12880)
+* Fix problem with some OSX not handling the cast on imDecode (#13207)
+* [MXNET-953] Fix oob memory read (#12631)
+* Fix Sphinx error in ONNX file (#13251)
+* [Example] Fixing Gradcam implementation (#13196)
+* Fix train mnist for inception-bn and resnet (#13239)
+* Fix a bug in index_copy (#13218)
+* Fix Sphinx errors in box_nms (#13261)
+* Fix Sphinx errors (#13252)
+* Fix the cpp example compiler flag (#13293)
+* Made fixes to sparse.py and sparse.md (#13305)
+* [Example] Gradcam- Fixing a link (#13307)
+* Manually track num_max_thread (#12380)
+* [Issue #11912] throw mxnet exceptions when decoding invalid images. (#12999)
+* Undefined name: load_model() --> utils.load_model() (#12867)
+* Change the way NDArrayIter handle the last batch (#12545)
+* Add embedding to print_summary (#12796)
+* Allow foreach on input with 0 length (#12471)
+* [MXNET-360]auto convert str to bytes in img.imdecode when py3 (#10697)
+
 
 Review comment:
   Please add
   * Fix unpicklable transform_first on windows (#13686 )
