It appears this is the full extent of the stack trace. Everything prior to the
org.apache.hadoop frames is from my container, which is where hadoop is called from.
Caused by: java.io.IOException: Call to /127.0.0.1:9001 failed on local exception: java.io.EOFException
at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
at org.apache.hadoop.ipc.Client.call(Client.java:743)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
at org.apache.hadoop.mapred.$Proxy55.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
at org.apache.hadoop.mapred.JobClient.createRPCProxy(JobClient.java:429)
at org.apache.hadoop.mapred.JobClient.init(JobClient.java:423)
at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:410)
at org.apache.hadoop.mapreduce.Job.<init>(Job.java:50)
at com.allenabi.sherlock.graph.OfflineDataTool.run(OfflineDataTool.java:25)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at com.allenabi.sherlock.graph.OfflineDataComponent.submitJob(OfflineDataComponent.java:67)
... 64 more
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:375)
at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
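
For what it's worth: an EOFException thrown from DataInputStream.readInt while
the IPC client reads a response usually means the server closed the connection
during the handshake, most often because of a Hadoop version mismatch between
the client jars and the cluster, or because the client is pointed at the wrong
daemon's port. A quick client-side check, as a sketch (compare the output with
the version shown on the JobTracker web UI):

    import org.apache.hadoop.util.VersionInfo;

    public class ClientVersionCheck {
        public static void main(final String[] args) {
            // Version of the Hadoop jars on the client classpath
            System.out.println("client version: " + VersionInfo.getVersion());
            System.out.println("built from: " + VersionInfo.getRevision());
        }
    }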
Alex Thieme
[email protected]
508-361-2788
On Feb 12, 2013, at 8:16 PM, Hemanth Yamijala <[email protected]> wrote:
> Can you please include the complete stack trace and not just the root? Also,
> have you set fs.default.name to an hdfs location like hdfs://localhost:9000?
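>
> For example, a minimal sketch of what I mean (assuming a pseudo-distributed
> setup with the NameNode on port 9000 and the JobTracker on 9001; the ports
> are illustrative):
>
> final Configuration conf = new Configuration();
> // fs.default.name points at the NameNode; mapred.job.tracker at the JobTracker
> conf.set("fs.default.name", "hdfs://localhost:9000");
> conf.set("mapred.job.tracker", "localhost:9001");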
>
> Thanks
> Hemanth
>
> On Wednesday, February 13, 2013, Alex Thieme wrote:
> Thanks for the prompt reply and I'm sorry I forgot to include the exception.
> My bad. I've included it below. There certainly appears to be a server
> running on localhost:9001. At least, I was able to telnet to that address.
> While in development, I'm treating the server on localhost as the remote
> server. Moving to production, there'd obviously be a different remote server
> address configured.
>
> Root Exception stack trace:
> java.io.EOFException
> at java.io.DataInputStream.readInt(DataInputStream.java:375)
> at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
> at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
> + 3 more (set debug level logging or '-Dmule.verbose.exceptions=true' for everything)
> ********************************************************************************
>
> On Feb 12, 2013, at 4:22 PM, Nitin Pawar <[email protected]> wrote:
>
>> conf.set("mapred.job.tracker", "localhost:9001");
>>
>> this means that your jobtracker is on port 9001 on localhost
>>
>> if you change it to the remote host, and that's the port it's running on,
>> then it should work as expected
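>>
>> for example, something like this (host and ports below are placeholders for
>> your cluster):
>>
>> conf.set("mapred.job.tracker", "remote-host.example.com:9001");
>> conf.set("fs.default.name", "hdfs://remote-host.example.com:9000");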
>>
>> what's the exception you are getting?
>>
>>
>> On Wed, Feb 13, 2013 at 2:41 AM, Alex Thieme <[email protected]> wrote:
>> I apologize for asking what seems to be such a basic question, but I could
>> use some help with submitting a job to a remote server.
>>
>> I have downloaded and installed hadoop locally in pseudo-distributed mode. I
>> have written some Java code to submit a job.
>>
>> Here are the org.apache.hadoop.util.Tool and
>> org.apache.hadoop.mapreduce.Mapper I've written.
>>
>> If I enable the conf.set("mapred.job.tracker", "localhost:9001") line, then
>> I get the exception included below.
>>
>> If that line is disabled, then the job completes. However, reviewing the
>> hadoop server administration page
>> (http://localhost:50030/jobtracker.jsp), I don't see the job as having been
>> processed by the server. Instead, I wonder if my Java code is simply running
>> the necessary mapper Java code in-process, bypassing the locally installed
>> server.
>>
>> Thanks in advance.
>>
>> Alex
>>
>> public class OfflineDataTool extends Configured implements Tool {
>>
>>     public int run(final String[] args) throws Exception {
>>         final Configuration conf = getConf();
>>         // conf.set("mapred.job.tracker", "localhost:9001");
>>
>>         final Job job = new Job(conf);
>>         job.setJarByClass(getClass());
>>         job.setJobName(getClass().getName());
>>
>>         job.setMapperClass(OfflineDataMapper.class);
>>
>>         job.setInputFormatClass(TextInputFormat.class);
>>
>>         job.setMapOutputKeyClass(Text.class);
>>         job.setMapOutputValueClass(Text.class);
>>
>>         job.setOutputKeyClass(Text.class);
>>         job.setOutputValueClass(Text.class);
>>
>>         FileInputFormat.addInputPath(job, new org.apache.hadoop.fs.Path(args[0]));
>>
>>         final org.apache.hadoop.fs.Path output =
>>                 new org.apache.hadoop.fs.Path(args[1]); // assuming args[1] is the output path
>>         FileOutputFormat.setOutputPath(job, output);
>>
>>         return job.waitForCompletion(true) ? 0 : 1;
>>     }
>> }
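>>
>> For completeness, a typical driver for a Tool like this, as a sketch (the
>> main method below is an assumption, not from the original mail; ToolRunner
>> parses generic options such as -D, -fs and -jt before calling run()):
>>
>> public static void main(final String[] args) throws Exception {
>>     final int rc = org.apache.hadoop.util.ToolRunner.run(
>>             new org.apache.hadoop.conf.Configuration(), new OfflineDataTool(), args);
>>     System.exit(rc);
>> }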