Thanks, Shekhar. I'm unfamiliar with Flume, but I will look into it later.
2014-03-02 15:36 GMT+08:00 Shekhar Sharma shekhar2...@gmail.com:
Don't you think using Flume would be easier? Use an HDFS sink and a
property to roll the log file every hour.
By doing it this way you use a single
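The hourly roll-out Shekhar describes can be sketched as a Flume agent configuration; the agent, channel, and sink names below and the HDFS path are hypothetical placeholders:

```properties
# Hypothetical agent "a1" with an HDFS sink that rolls a new file every hour
a1.sources = r1
a1.channels = c1
a1.sinks = k1

a1.sinks.k1.type = hdfs
a1.sinks.k1.channel = c1
a1.sinks.k1.hdfs.path = hdfs://namenode:9000/logs/%Y-%m-%d
a1.sinks.k1.hdfs.rollInterval = 3600
a1.sinks.k1.hdfs.rollSize = 0
a1.sinks.k1.hdfs.rollCount = 0
```

Setting hdfs.rollSize and hdfs.rollCount to 0 disables size- and count-based rolling, so only the hourly interval triggers a new file.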
Actually it does show in Ambari, but the only time I've seen it is when adding
a new host: it shows in the list of other registered hosts.
John
From: John Lilley [mailto:john.lil...@redpoint.net]
Sent: Saturday, March 01, 2014 12:32 PM
To: user@hadoop.apache.org
Subject: how to remove a dead node?
Forwarding this message to the hadoop list as well. Appreciate any help.
Cheers !!!
Siddharth Tiwari
Have a refreshing day !!!
"Every duty is holy, and devotion to duty is the highest form of worship of God."
Hi Team,
I have 10 disks over which I am running my HDFS. Out of these, disk5 is where I have
hadoop.tmp.dir configured. I see huge IO on this disk when I run my
jobs compared to the other disks. Can you guide me to the standards to follow so
that this IO can be distributed across the other
Hi,
Set the configuration below in your word count job:
Configuration config = new Configuration();
config.set("fs.default.name", "hdfs://xyz-hostname:9000");
config.set("mapred.job.tracker", "xyz-hostname:9001");
config.set("yarn.application.classpath", "$HADOOP_CONF_DIR,
One more configuration to be added:
config.set("mapreduce.framework.name", "yarn");
Thanks
Rohith
From: Rohith Sharma K S [mailto:rohithsharm...@huawei.com]
Sent: 03 March 2014 09:02
To: user@hadoop.apache.org
Subject: RE: Problem in Submitting a Map-Reduce Job to
Hi team,
What does the following error signify?
java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException:
Hive Runtime Error while processing row (tag=1)
{key:{joinkey0:},value:{_col2:92,_col11:-60-01-21,00,_col12:-03-07-04,00},alias:1}
at
It seems you started the cluster with the default values for the following two
properties and configured only hadoop.tmp.dir:
dfs.datanode.data.dir --- file://${hadoop.tmp.dir}/dfs/data (default value)
Determines where on the local filesystem a DFS data node should store its
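To spread datanode I/O across all ten disks instead of falling back to the hadoop.tmp.dir default, dfs.datanode.data.dir can list one directory per disk; a minimal hdfs-site.xml sketch, where the /disk1 … /disk10 mount points are hypothetical:

```xml
<property>
  <name>dfs.datanode.data.dir</name>
  <!-- one directory per physical disk; the datanode round-robins
       new blocks across the listed volumes -->
  <value>/disk1/hdfs/data,/disk2/hdfs/data,/disk3/hdfs/data,/disk10/hdfs/data</value>
</property>
```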
Hi List,
I'm currently confused by hadoop.tmp.dir because its default value
/tmp/hadoop-${user.name} usually points to a directory in tmpfs on Linux.
So after the name node machine reboots, it is gone and the name node
fails to start.
I found this was reported here.
Hi Brahma,
No I haven't; I have put a comma-separated list of disks in dfs.datanode.data.dir
and put disk5 for hadoop.tmp.dir. My question is: should we set up hadoop.tmp.dir
or not? If yes, what should the standards around it be?
You can use any directory you like, provided the permissions are right.
Warm Regards
Shashwat Shriparv
I use Hadoop 2.2 and I want to view the MapReduce web UI, so I visit the following URL:
http://172.11.12.6:50030/jobtracker.jsp
Firefox can't establish a connection to the server at
172.11.12.6:50030.
Where am I going wrong?
In Hadoop 2.2 there is no actual jobtracker running; you may want to access
the Resource Manager web UI instead: http://172.11.12.6:8088/
Regards,
Stanley Shi
On Mon, Mar 3, 2014 at 2:07 PM, EdwardKing zhan...@neusoft.com wrote:
I use Hadoop 2.2 and I want to run MapReduce web UI,so I visit
On Mon, Mar 03, 2014 at 11:25:59AM +0530, shashwat shriparv wrote:
You can use any directory you like, provided the permissions are right.
I mean, would it be better to change the default hadoop.tmp.dir? Because it
does not survive a reboot in a default Linux environment.
--
Thanks,
Chengwei
Yes, it's always better to change the temp dir path in Hadoop, as it
prevents the files from being deleted when the server reboots.
What should be the standard around setting up the hadoop.tmp.dir parameter.
As far as I know, hadoop.tmp.dir is used by the following properties.
If you configure those properties, then you don't need to configure
this one:
MapReduce:
mapreduce.cluster.local.dir
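Moving hadoop.tmp.dir off tmpfs, as discussed above, is a core-site.xml change; a minimal sketch, where the /data/hadoop path is a hypothetical placeholder:

```xml
<property>
  <name>hadoop.tmp.dir</name>
  <!-- a persistent local path so the directory survives reboots,
       unlike the /tmp/hadoop-${user.name} default -->
  <value>/data/hadoop/tmp</value>
</property>
```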
I am new to Hadoop and my project is about how to contribute to Hadoop. I read that
link but I don't understand how I can fix the issues. First I want to read
the Hadoop code; after that I can try to solve the issues. Please suggest
how I can set up Hadoop development on my system; there are so many files
Hi,
You may start here:
http://wiki.apache.org/hadoop/HowToContribute
Regards,
Ricardo Boaretto.
On Mar 3, 2014 4:03 AM, Banty Sharma bantysharma...@gmail.com wrote:
I am new in hadoop and my project is how to contribute hadoop...i read
that link but not getting how can i fix the issues...