GitHub user cnauroth commented on the issue:

    https://github.com/apache/spark/pull/14659
  
    Hello @Sherry302. Thank you for the patch.
    
    From the HDFS perspective, we recommend against using spaces in the value
of the caller context.  The HDFS audit log frequently gets parsed by
administrators using ad-hoc scripting, and spaces in the fields make that
parsing more challenging.  For example, if an administrator used an awk script
that parsed $NF expecting to find the callerContext, then the trailing
"on Spark" would cause it to return just "Spark" instead of the full caller
context.
    
    May I suggest prepending "Spark" instead?  Perhaps something like this:

        callerContext=Spark_JobId_0_StageID_0_stageAttemptId_0_taskID_0_attemptNumber_0
