Dear all:
I had a problem: the namenode could not start when I ran start-dfs.sh.
It showed the message below:
FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in
namenode join
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException:
Directory
Are you sure /home/hadoop/mydata/hdfs/namenode exists and has the right
permissions?
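To check, the directory can be verified from the shell before starting the daemons; a minimal sketch (the /tmp path below is a stand-in for the real dfs.namenode.name.dir, not the actual path from the thread):

```shell
# Stand-in for /home/hadoop/mydata/hdfs/namenode -- substitute your real
# dfs.namenode.name.dir, and run as (or chown to) the user that starts the namenode.
NN_DIR=/tmp/mydata/hdfs/namenode

mkdir -p "$NN_DIR"    # create the directory tree if it is missing
chmod 755 "$NN_DIR"   # the namenode user needs rwx on its metadata directory
ls -ld "$NN_DIR"      # verify the owner and permission bits
```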
On Tue, Mar 25, 2014 at 4:51 PM, haihong lu ung3...@gmail.com wrote:
Dear all:
I had a problem: the namenode could not start when I ran start-dfs.sh.
It showed the message below:
FATAL
Rather than a memory problem, it was a disk problem. I freed up more space and
that fixed it.
Regards,
Mahmood
On Saturday, March 22, 2014 8:58 PM, Mahmood Naderan nt_mahm...@yahoo.com
wrote:
Really stuck at this step. I have tested with a smaller data set and it works. Now
I am using Wikipedia
Please format the namenode and then run start-dfs.sh.
Command for namenode format:
hdfs namenode -format
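The full sequence is roughly as follows (an operational sketch, not runnable as-is: it assumes a configured cluster, and formatting destroys any existing HDFS metadata):

```shell
stop-dfs.sh            # stop any running HDFS daemons first
hdfs namenode -format  # note the leading dash on -format
start-dfs.sh           # then bring HDFS back up
```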
Thanks & Regards,
Brahma Reddy Battula
From: Azuryy Yu [azury...@gmail.com]
Sent: Tuesday, March 25, 2014 2:39 PM
To: user@hadoop.apache.org
hi,
how do I execute the Hadoop source code imported into Eclipse?
thanks
do you want to import the project into Eclipse, or have you got the code in
Eclipse and now want to do a build?
On Tue, Mar 25, 2014 at 4:22 PM, Avinash Kujur avin...@gmail.com wrote:
hi,
how do I execute the Hadoop source code imported into Eclipse?
thanks
--
Nitin Pawar
I have already imported the files. I want to build them.
On Tue, Mar 25, 2014 at 3:57 AM, Nitin Pawar nitinpawar...@gmail.com wrote:
do you want to import the project into Eclipse, or have you got the code in
Eclipse and now want to do a build?
On Tue, Mar 25, 2014 at 4:22 PM, Avinash Kujur
Hi,
The Hadoop version is CDH5 Beta 1.
The NameNode and ResourceManagers have been configured in HA mode.
After Kerberos is enabled, the resource manager log shows the following error:
2014-03-25 22:21:06,854 WARN org.apache.hadoop.ipc.Client: Exception
encountered while connecting to the server :
I am trying to build a Hive table on an MS-DOS file (records end with the CRLF
character). Does anyone know how to do this?
Thanks
Manish
/home/hadoop/mydata/hdfs/namenode should be created when start-dfs.sh is
executed. I had formatted the namenode before running start-dfs.sh, but it had no
effect.
On Tue, Mar 25, 2014 at 6:39 PM, Brahma Reddy Battula
brahmareddy.batt...@huawei.com wrote:
Please format the namenode and then run start-dfs.sh.
I think you can only use \n to denote new lines in Hive. What if you
replaced the CRLF characters with \n in the data pipeline into HDFS, or
with a MapReduce job after the files are in HDFS?
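The carriage-return stripping itself is a one-liner with tr; a minimal sketch using a made-up sample file (in an ingest pipeline you could pipe the result into something like hadoop fs -put - <dst>, since -put can read from stdin):

```shell
# Make a small DOS-style sample (lines end with \r\n)
printf 'id,name\r\n1,alice\r\n2,bob\r\n' > /tmp/dos_sample.csv

# Delete every carriage return so records end with \n only
tr -d '\r' < /tmp/dos_sample.csv > /tmp/unix_sample.csv

# Inspect the bytes: no \r should remain
od -c /tmp/unix_sample.csv
```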
On Tue, Mar 25, 2014 at 6:15 PM, Manish Verma manish.lifepa...@gmail.com wrote:
I am trying to build a
Hi Andrew,
Some of the field values in this file have LF characters in them. I was trying to find
a way that does not require processing the file to make it conform to the Unix
file style. I believe that by writing your own file format/splitter classes
you could use any delimiter in a MapReduce input file. I did
Are you getting the same exception if you format and then run start-dfs.sh?
Are you using the same config, I mean, while formatting the namenode and
while starting it?
Can you please check or paste the configurations, and the exceptions in the namenode log
and the format log?
And I hope you are formatting and
Since you know the format of your output, writing something to read from it
should be very easy.
What's your OutputFormat? If you didn't set it, please refer to the class
org.apache.hadoop.mapreduce.lib.output.TextOutputFormat.
You can use the class
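For reference, TextOutputFormat's default layout is one record per line, key and value separated by a tab, so reading it back is straightforward. A sketch using a made-up part file (the file name and contents are illustrative only):

```shell
# Fake part file in TextOutputFormat's default key<TAB>value layout
printf 'hadoop\t3\nhive\t7\n' > /tmp/part-r-00000

# Split each line on the tab separator
awk -F'\t' '{ printf "key=%s value=%s\n", $1, $2 }' /tmp/part-r-00000
# -> key=hadoop value=3
# -> key=hive value=7
```

The tab separator is only the default; it is configurable in the job configuration if your job set a different one.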
Yes. But the issue has been fixed, thanks for the help.
In my script, I used hdfs namenode -format instead of hdfs namenode
format.
On Wed, Mar 26, 2014 at 12:33 PM, Brahma Reddy Battula
brahmareddy.batt...@huawei.com wrote:
Are you getting the same exception if you format and then run start-dfs.sh?