zhengchenyu opened a new issue, #1109:
URL: https://github.com/apache/incubator-uniffle/issues/1109

   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of 
Conduct](https://www.apache.org/foundation/policies/conduct)
   
   
   ### Search before asking
   
   - [X] I have searched in the 
[issues](https://github.com/apache/incubator-uniffle/issues?q=is%3Aissue) and 
found no similar issues.
   
   
   ### Describe the bug
   
   When we use remote storage, we flush data to HDFS, but the owner of the stored files is the application id. See the storage listing:
   
   ```
   [yarn@host dir]$ hdfs dfs -ls /uniffle-rss
   Found 1 items
   drwxrwx--x   - application_1691045129773_0174 yarn          0 2023-08-04 
16:52 /uniffle-rss/appattempt_1691045129773_0174_000001
   ```
   
   Treating the application id as the user seems reasonable, because it is safer. But the maximum number of distinct Hadoop users is 16777215, which is a hard limit. Even after the related remote storage directory is deleted, the user string is not removed from the SerialNumberMap until the NameNode restarts.
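   For context on where the 16777215 figure comes from: the HDFS NameNode packs the owner of each inode into a 24-bit serial number (via its SerialNumberManager), so the number of distinct user strings it can ever track is bounded. A quick sanity check of that arithmetic:
   
   ```python
   # The NameNode stores each inode's user as a 24-bit serial number,
   # so the maximum count of distinct user names is 2^24 - 1.
   USER_BITS = 24
   max_hadoop_users = (1 << USER_BITS) - 1
   print(max_hadoop_users)  # 16777215
   ```
   
   Since every Uniffle application contributes a fresh user string (its appid), long-running clusters could eventually exhaust this space.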
   
   
   ### Affects Version(s)
   
   master
   
   ### Uniffle Server Log Output
   
   _No response_
   
   ### Uniffle Engine Log Output
   
   _No response_
   
   ### Uniffle Server Configurations
   
   _No response_
   
   ### Uniffle Engine Configurations
   
   _No response_
   
   ### Additional context
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [X] Yes I am willing to submit a PR!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

