celestehorgan commented on code in PR #676:
URL: https://github.com/apache/spark-website/pull/676#discussion_r2798805096


##########
security.md:
##########
@@ -43,6 +43,19 @@ internet or untrusted networks. We recommend access within trusted networks (com
 private cloud environments), using restrict access to the Spark cluster with robust authentication, 
 authorization, and network controls.
 
+<h3>Is loading a machine learning model secure? Who is responsible for model security?</h3>
+
+Loading an Apache Spark ML model is equivalent to loading and executing code within the Spark runtime.
+
+Spark ML models may contain serialized objects, custom transformers, user-defined expressions, and execution graphs. 
+During model loading, Spark deserializes these components, reconstructs the pipeline, and instantiates runtime objects. 
+This process can invoke executable logic on the Spark driver and executors. As a result, a malicious or tampered model 
+may execute arbitrary code, access sensitive data, or compromise cluster nodes.

Review Comment:
   Is it a malicious model or _any_ model that might execute arbitrary code? 
   
   Try: 
   
   
   ```suggestion
   This process can invoke executable logic on the Spark driver and executors. Any model, but particularly one that has been compromised or intentionally created with malicious intent, 
   might execute arbitrary code, access sensitive data, or compromise cluster nodes.
   ```
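
   Not a wording suggestion, just context for why the distinction matters: here is a minimal PySpark sketch (the model path, app name, and input DataFrame are hypothetical) showing that `PipelineModel.load` instantiates the pipeline's stage classes on the driver, so whatever code those classes carry runs inside the Spark runtime whether or not the model was built with malicious intent.
   
   ```python
   from pyspark.sql import SparkSession
   from pyspark.ml import PipelineModel
   
   spark = SparkSession.builder.appName("model-load-example").getOrCreate()
   
   # Hypothetical path: a model artifact obtained from an external source.
   # load() reads the pipeline metadata and instantiates each stage's class on
   # the driver, so any code those classes carry executes in the Spark runtime.
   model = PipelineModel.load("/tmp/untrusted_model")
   
   # Hypothetical input; transform() then runs the reconstructed stages
   # (including any custom transformers) on the driver and executors.
   df = spark.createDataFrame([(1.0,)], ["feature"])
   predictions = model.transform(df)
   ```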



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

