) with
private IPs.
If you have records in the nodes' hosts files like
public-IP hostname
remove (or comment) them.
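For example (a self-contained sketch: the addresses and hostname below are made up, and the edit is shown on a demo copy rather than on /etc/hosts itself):

```shell
# Hypothetical hosts entries: one public-IP mapping to comment out,
# one private mapping to keep.
cat > hosts.demo <<'EOF'
127.0.0.1     localhost
203.0.113.10  dn1.example.com
10.0.0.10     dn1.example.com
EOF
# Comment out the public-IP line so name lookups resolve to the private address.
sed -i '/^203\.0\.113\.10/s/^/# /' hosts.demo
cat hosts.demo
```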
Alex
On Jul 11, 2013 2:21 AM, Ben Kim benkimkim...@gmail.com wrote:
Hello Hadoop Community!
I've set up datanodes on a private network by adding the private hostnames to
the slaves file,
but it looks like when I check the web UI the datanodes are registered with
public hostnames.
Are they actually communicating over the public network?
All datanodes have eth0 with public
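One thing worth checking is which interface the DataNode uses when it reports its hostname to the NameNode. A sketch for hdfs-site.xml, assuming Hadoop 1.x and that eth1 carries the private network (both are assumptions; verify the property name against your version's docs):

```xml
<property>
  <name>dfs.datanode.dns.interface</name>
  <value>eth1</value>  <!-- assumption: eth1 is the private interface -->
</property>
```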
Hi,
This is a very basic, fundamental question:
does the time on all nodes need to be synced?
I've never even thought about time in a Hadoop cluster, but recently I
experienced my servers going out of sync. I know HBase requires the
time to be synced because of its timestamps. But I wonder whether any of
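For a quick look at how far apart two machines' clocks are, something like this works (a sketch: the remote call is stubbed with a local `date` so the snippet is self-contained; on a real cluster replace it with `ssh "$node" date +%s`):

```shell
# Compare this node's clock against another node's, in whole seconds.
local_ts=$(date +%s)
remote_ts=$(date +%s)   # stand-in for: ssh dn1 date +%s
skew=$(( local_ts > remote_ts ? local_ts - remote_ts : remote_ts - local_ts ))
echo "skew=${skew}s"
# Running ntpd (or chronyd) on every node keeps this near 0; HBase in
# particular refuses regionservers whose clocks drift too far from the master.
```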
Hi, I downloaded Hadoop 2.0.4 and keep getting these errors from the Hadoop CLI
and MapReduce task logs:
13/05/24 14:34:17 WARN util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
i tried adding $HADOOP_HOME/lib/native/* to
using different user
What worked, though, is remotely running an MR job using the Hadoop API,
so it seems like this is only happening in the Hadoop CLI.
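For reference, the usual places to point at the native libs are these two variables in conf/hadoop-env.sh (a sketch, assuming the libraries really are under $HADOOP_HOME/lib/native; the exact path differs per build and platform):

```shell
# Make the native-hadoop libraries visible to the CLI's JVM.
export JAVA_LIBRARY_PATH="$HADOOP_HOME/lib/native:$JAVA_LIBRARY_PATH"
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HADOOP_HOME/lib/native"
```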
On Wed, Mar 13, 2013 at 1:00 AM, Ben Kim benkimkim...@gmail.com wrote:
Hi there
It looks like the job.jar created in the /user/hadoop/.staging/ folder is
always the same no matter which jar file I give it.
If I download the job.jar file, I reckon it's the jar file I used to
run an MR job a few hours ago.
I'm using Hadoop 1.0.3 on top of CentOS 6.2.
Anyone have any
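One quick way to confirm whether a stale jar is being staged is to checksum both copies (a sketch: the `hadoop fs -get` step is left as a comment and stubbed with local files, so the comparison itself runs anywhere):

```shell
# hadoop fs -get /user/hadoop/.staging/<job_id>/job.jar staged.jar
echo demo > my-job.jar      # placeholder for the jar you built
cp my-job.jar staged.jar    # placeholder for the fetched staging copy
# Compare checksums: a mismatch means the client submitted a different jar.
if [ "$(md5sum < my-job.jar)" = "$(md5sum < staged.jar)" ]; then
  result="jars match"
else
  result="stale jar was submitted"
fi
echo "$result"
```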
Attached a screenshot showing the retries
On Tue, Jan 29, 2013 at 4:35 PM, Ben Kim benkimkim...@gmail.com wrote:
Hi!
I have come across a situation where I found a single reducer task
executing with multiple retries simultaneously,
which could slow down the whole reduce process.
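If those simultaneous attempts turn out to be speculative execution rather than genuine retries after failures, they can be switched off per job or cluster-wide. A sketch for mapred-site.xml, assuming the Hadoop 1.x property name:

```xml
<!-- Stop the JobTracker from launching duplicate speculative reduce
     attempts (mapred.map.tasks.speculative.execution is the map-side twin). -->
<property>
  <name>mapred.reduce.tasks.speculative.execution</name>
  <value>false</value>
</property>
```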
is for
blk_4844131893883391179_3440513.
How would I delete the block? It's not showing as a corrupted block in fsck.
:(
BEN
On Tue, Jan 22, 2013 at 9:28 AM, Ben Kim benkimkim...@gmail.com wrote:
Hi Varun, thank you for the response.
No, there don't seem to be any corrupted blocks in my cluster.
I
. Maybe someone else can figure it out when I come back
tomorrow :)
Best regards,
Ben
On Tue, Jan 22, 2013 at 5:38 PM, Ben Kim benkimkim...@gmail.com wrote:
UPDATE:
The WARN about the edit log had nothing to do with the current problem.
However, the replica placement warnings seem suspicious.
Please
There isn't any WARN or ERROR in the decommissioning datanode log
Ben
On Mon, Jan 21, 2013 at 3:05 PM, varun kumar varun@gmail.com wrote:
Hi Ben,
Are there any corrupted blocks in your Hadoop cluster?
Regards,
Varun Kumar
On Mon, Jan 21, 2013 at 8:22 AM, Ben Kim benkimkim...@gmail.com wrote:
Hi !
I'm using hadoop-1.0.3 to run streaming jobs with map/reduce shell scripts
such as this:
bin/hadoop jar ./contrib/streaming/hadoop-streaming-1.0.3.jar -input /input
-output /output/015 -mapper streaming-map.sh -reducer
streaming-reduce.sh -file /home/hadoop/streaming/streaming-map.sh -file
Never mind, the problem has been fixed.
The problem was a trailing {control-v}{control-m} character at the end of the
first line, #!/bin/bash
(which I blame on my teammate for writing the script in Windows Notepad!!)
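For anyone hitting the same thing: the stray character is a carriage return, and stripping it is a one-liner with sed (dos2unix does the same where installed). Sketch on a demo file standing in for streaming-map.sh:

```shell
# Create a file with Windows line endings, like the broken script had.
printf '#!/bin/bash\r\necho hello\r\n' > script.sh
# Delete the trailing carriage return from every line.
sed -i 's/\r$//' script.sh
head -n 1 script.sh    # first line is now a clean "#!/bin/bash"
```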
On Fri, Jan 4, 2013 at 8:09 PM, Ben Kim benkimkim...@gmail.com wrote:
Hi !
I'm
regards,
Ben Kim
On Fri, Jul 20, 2012 at 10:33 AM, Harsh J ha...@cloudera.com wrote:
Dan,
Can you share your error? Plain .gz files (not .tar.gz) are natively
supported by Hadoop via its GzipCodec, and if you are facing an error, I
believe it's caused by something other than the compression
Hi
I'm having a similar problem, so I'll continue on this thread to describe
my issue.
I ran an MR job that takes 70 GB of input and creates 1098 mappers and 100
reducers to process tasks (on a 9-node Hadoop cluster),
but the job fails and 4 datanodes die after a few minutes (the processes are
still running,
Hi
I got my topology script from
http://wiki.apache.org/hadoop/topology_rack_awareness_scripts
I checked that the script works correctly.
But in the Hadoop cluster, all my servers get assigned to the default rack.
I'm using Hadoop 1.0.3, but I experienced the same problem with version 1.0.0.
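A common cause is that the script is never wired up in the configuration, in which case the NameNode silently falls back to /default-rack. A sketch for core-site.xml, assuming the Hadoop 1.x property name and a hypothetical script path; the script must also be executable by the NameNode user, and the NameNode restarted afterwards:

```xml
<property>
  <name>topology.script.file.name</name>
  <value>/home/hadoop/conf/topology.sh</value>  <!-- hypothetical path -->
</property>
```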