Dear pig-dev mailing list,
I just want to understand one thing quickly. Below is code from
TestMapReduce.java. As you can see, the temp file is created on the local
machine, but I don't understand how Hadoop MapReduce picks up the file from
the local file system rather than from HDFS.
PigServer pig = new PigServer(MAPREDUCE);
File tmpFile = File.createTempFile("test", ".txt");
PrintStream ps = new PrintStream(new FileOutputStream(tmpFile));
for (int i = 0; i < 10; i++) {
    ps.println(i + "\t" + i);
}
ps.close();
String query = "foreach (load 'file:" + tmpFile + "') generate $0,$1;";
System.out.println(query);
pig.registerQuery("asdf_id = " + query);
try {
    pig.deleteFile("frog");
} catch (Exception e) {}
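My current guess is that it comes down to the "file:" prefix in the load
path: Hadoop's FileSystem layer dispatches on the URI scheme, so "file:"
resolves to the local file system while "hdfs:" (or a scheme-less path,
which falls back to fs.default.name) resolves to HDFS. A minimal sketch of
that scheme parsing, using only the standard library (the class name
SchemeDemo and the path are made up for illustration):

```java
import java.net.URI;

public class SchemeDemo {
    public static void main(String[] args) {
        // Hypothetical path, mirroring the "file:" + tmpFile query above.
        URI u = URI.create("file:/tmp/test123.txt");
        // Hadoop's FileSystem.get(uri, conf) would dispatch on this scheme:
        // "file" -> local file system, "hdfs" -> HDFS; a path with no
        // scheme falls back to whatever fs.default.name is configured to.
        System.out.println(u.getScheme()); // prints "file"
    }
}
```

Is that roughly the right mental model, or does something else redirect the
load to the local machine here?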
Cheers,
Pi