Hi everyone,

I'm running Flink and/or Hadoop on my cluster, and both generate log data
in each worker node's /local folder (a regular mount point). I would now
like to process these files with Flink, but I'm not sure how to tell Flink
to use each worker node's /local folder as the input path, because I'd
expect Flink to look only in the /local folder of the submitting node. Do
I have to put these files into HDFS first, or is there a way to tell Flink
that the file:///local URI refers to worker-local data? Thanks in advance
for any hints, and best regards,

Robert
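
PS: For concreteness, a minimal sketch of what I'm attempting with the
DataSet API; the path and job here are hypothetical, and my question is
precisely whether the file:// URI below resolves on each worker or only
on the node that submits the job:

```java
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

public class ReadLocalLogs {
    public static void main(String[] args) throws Exception {
        final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Hypothetical path: the per-worker log directory on a regular
        // local mount. Unclear to me whether this is read on each worker
        // or only on the submitting node.
        DataSet<String> logs = env.readTextFile("file:///local/logs");

        logs.print();
    }
}
```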

-- 
My GPG Key ID: 336E2680