Hi, I'm using the Hadoop 1.0.4 API to submit a job to a remote JobTracker. I modified the JobClient so that it can submit the same job to different JTs. For example, the JobClient runs on my PC and tries to submit the same job to two JTs at different sites on Amazon EC2. When I launch the job, during the setup phase the JobClient tries to write the split file info to the remote JT. This is the method of the JobClient where I have the problem:
public static void createSplitFiles(Path jobSubmitDir, Configuration conf,
    FileSystem fs, org.apache.hadoop.mapred.InputSplit[] splits)
    throws IOException {
  FSDataOutputStream out = createFile(fs,
      JobSubmissionFiles.getJobSplitFile(jobSubmitDir), conf);
  SplitMetaInfo[] info = writeOldSplits(splits, out, conf);
  out.close();
  writeJobSplitMetaInfo(fs,
      JobSubmissionFiles.getJobSplitMetaFile(jobSubmitDir),
      new FsPermission(JobSubmissionFiles.JOB_FILE_PERMISSION),
      splitVersion, info);
}

1 - The FSDataOutputStream hangs on the out.close() call. Why does it hang, and what should I do to solve this?

--
Best regards,
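P.S. In case it matters: each remote JT is selected through the job's Configuration before the JobClient is built. A sketch of the Hadoop 1.x properties involved is below; the EC2 hostnames and ports are placeholders, not my real values:

```xml
<!-- One set of properties per remote site (mapred-site.xml / core-site.xml style) -->
<property>
  <name>fs.default.name</name>
  <!-- placeholder NameNode address for site 1 -->
  <value>hdfs://ec2-site1.example.com:9000</value>
</property>
<property>
  <name>mapred.job.tracker</name>
  <!-- placeholder JobTracker address for site 1 -->
  <value>ec2-site1.example.com:9001</value>
</property>
```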