matianhe3 commented on issue #17370:
URL: https://github.com/apache/dolphinscheduler/issues/17370#issuecomment-3116780253

   Do I need to alter the worker's default config?
   
   ```
   # resource view suffixs
   #resource.view.suffixs=txt,log,sh,bat,conf,cfg,py,java,sql,xml,hql,properties,json,yml,yaml,ini,js
   
   # resource storage type: LOCAL, HDFS, S3, OSS, GCS, ABS, OBS, COS. LOCAL is the default type; it is a specific type of HDFS with the "resource.hdfs.fs.defaultFS = file:///" configuration
   # please notice that LOCAL mode does not support reading and writing in distributed mode, which means you can only use your resources on one machine, unless you
   # use a shared file mount point
   resource.storage.type=LOCAL
   # resources are stored on the HDFS/S3 path; resource files will be stored under this base path. Configure it yourself, and please make sure the directory exists on HDFS and has read/write permissions. "/dolphinscheduler" is recommended
   resource.storage.upload.base.path=/tmp/dolphinscheduler
   # The query interval
   resource.query.interval=10000
   
   # if resource.storage.type=HDFS, the user must have the permission to create directories under the HDFS root path
   resource.hdfs.root.user=hdfs
   # if resource.storage.type=S3, the value looks like: s3a://dolphinscheduler; if resource.storage.type=HDFS and namenode HA is enabled, you need to copy core-site.xml and hdfs-site.xml to the conf dir
   resource.hdfs.fs.defaultFS=hdfs://mycluster:8020
   
   # whether to startup kerberos
   hadoop.security.authentication.startup.state=false
   ```
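
   As the comments in the file note, `resource.storage.type=LOCAL` only works on a single machine (or over a shared mount), so a multi-worker setup typically needs a distributed backend. A minimal sketch of switching this excerpt to HDFS — the hostname, port, and base path below are placeholders, not values from this issue:

   ```properties
   # switch from LOCAL to HDFS-backed resource storage (sketch; adjust to your cluster)
   resource.storage.type=HDFS
   # base path on HDFS; must exist and be writable by the configured user
   resource.storage.upload.base.path=/dolphinscheduler
   # user with permission to create directories under the HDFS root path
   resource.hdfs.root.user=hdfs
   # placeholder namenode address; with namenode HA, also copy core-site.xml
   # and hdfs-site.xml into the conf dir, per the comment above
   resource.hdfs.fs.defaultFS=hdfs://namenode-host:8020
   ```

   The same `common.properties` changes would need to be applied consistently on the master, worker, and API server so all components resolve resources from the same storage.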

