This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
     new 7d4b4c0  fix markdown hyperlink syntax (#9756)
7d4b4c0 is described below

commit 7d4b4c0d6482894eef869fc2923c5083c7081fcf
Author: Rahul Huilgol <rahulhuil...@gmail.com>
AuthorDate: Wed Feb 14 20:20:46 2018 -0800

    fix markdown hyperlink syntax (#9756)
    
    Removed spacing between link and title
---
 docs/faq/security.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/faq/security.md b/docs/faq/security.md
index 09fa22b..0615acd 100644
--- a/docs/faq/security.md
+++ b/docs/faq/security.md
@@ -12,7 +12,7 @@ In particular the following threat-vectors exist when training using MXNet:
 It is highly recommended that the following best practices be followed when using MXNet:
 
 * Run MXNet with least privilege, i.e. not as root.
-* Run MXNet training jobs inside a secure and isolated environment. If you are using a cloud provider like Amazon AWS, running your training job inside a [private VPC] (https://aws.amazon.com/vpc/) is a good way to accomplish this. Additionally, configure your network security settings so as to only allow connections that the cluster nodes require.
+* Run MXNet training jobs inside a secure and isolated environment. If you are using a cloud provider like Amazon AWS, running your training job inside a [private VPC](https://aws.amazon.com/vpc/) is a good way to accomplish this. Additionally, configure your network security settings so as to only allow connections that the cluster nodes require.
 * Make sure no unauthorized actors have physical or remote access to the nodes participating in MXNet training.
 * During training, one can configure MXNet to periodically save model checkpoints. To protect these model checkpoints from unauthorized access, make sure the checkpoints are written out to an encrypted storage volume, and have a provision to delete checkpoints that are no longer needed.
 * When sharing trained models, or when receiving trained models from other parties, ensure that model artifacts are authenticated and integrity protected using cryptographic signatures, thus ensuring that the data received comes from trusted sources and has not been maliciously (or accidentally) modified in transit.
@@ -21,4 +21,4 @@ It is highly recommended that the following best practices be followed when usin
 # Deployment Considerations
 The following are not MXNet framework specific threats but are applicable to Machine Learning models in general.
 
-* When deploying high-value, proprietary models for inference, care should be taken to prevent an adversary from stealing the model. The research paper [Stealing Machine Learning Models via Prediction APIs] (https://arxiv.org/pdf/1609.02943.pdf) outlines experiments performed to show how an attacker can use a prediction API to leak the ML model or construct a nearly identical replica. A simple way to thwart such an attack is to not expose the prediction probabilities to a high degree of  [...]
+* When deploying high-value, proprietary models for inference, care should be taken to prevent an adversary from stealing the model. The research paper [Stealing Machine Learning Models via Prediction APIs](https://arxiv.org/pdf/1609.02943.pdf) outlines experiments performed to show how an attacker can use a prediction API to leak the ML model or construct a nearly identical replica. A simple way to thwart such an attack is to not expose the prediction probabilities to a high degree of p [...]
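For reference, the syntax rule this patch relies on: in CommonMark and most Markdown renderers, the `(url)` part of an inline link must immediately follow the `[text]` part. With an intervening space, the brackets and parentheses are rendered as literal text rather than a hyperlink, which is what the removed spacing caused:

```markdown
[private VPC] (https://aws.amazon.com/vpc/)   renders as literal text, not a link
[private VPC](https://aws.amazon.com/vpc/)    renders as a hyperlink
```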
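The patched text recommends authenticating and integrity-protecting shared model artifacts with cryptographic signatures. As a minimal sketch of the idea (using a symmetric HMAC from Python's standard library rather than the asymmetric signatures a real distribution pipeline would use; the helper names and key are hypothetical):

```python
import hashlib
import hmac


def sign_artifact(data: bytes, key: bytes) -> str:
    """Compute an HMAC-SHA256 tag over a serialized model artifact."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()


def verify_artifact(data: bytes, key: bytes, expected_tag: str) -> bool:
    """Constant-time check of the tag shipped alongside the artifact."""
    return hmac.compare_digest(sign_artifact(data, key), expected_tag)


# Hypothetical usage: the sender publishes the tag with the checkpoint,
# and the key is distributed out of band.
key = b"shared-secret-key"
checkpoint = b"serialized model parameters"
tag = sign_artifact(checkpoint, key)

assert verify_artifact(checkpoint, key, tag)            # intact artifact
assert not verify_artifact(checkpoint + b"!", key, tag)  # tampered artifact
```

In practice one would sign with a private key (e.g., GPG) so that recipients can verify without holding a shared secret, but the verify-before-load step is the same.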

-- 
To stop receiving notification emails like this one, please contact
zhash...@apache.org.
