I think you need to check the RM and NM logs to find the detailed error message.
On Thu, Apr 17, 2014 at 2:30 PM, EdwardKing wrote:
hive.log is as follows:
2014-04-16 23:11:59,214 WARN common.LogUtils
(LogUtils.java:logConfigLocation(142)) - hive-site.xml not found on CLASSPATH
2014-04-16 23:11:59,348 WARN conf.Configuration
(Configuration.java:loadProperty(2172)) -
org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@1ab0
Maybe /tmp/$username/hive.log; you can check the parameter
'hive.log.dir' in hive-log4j.properties.
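For reference, the relevant defaults in hive-log4j.properties usually look like the sketch below (taken from a stock install; your values may differ):

```properties
# Directory and file name for Hive's own log output
hive.log.dir=/tmp/${user.name}
hive.log.file=hive.log
```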
On Thu, Apr 17, 2014 at 1:18 PM, EdwardKing wrote:
Where is hive.log? Thanks.
- Original Message -
From: Shengjun Xin
To: user@hadoop.apache.org
Sent: Thursday, April 17, 2014 12:42 PM
Subject: Re: question about hive under hadoop
For the first problem, you need to check the hive.log for the details
On Thu, Apr 17, 2014 at 11:06 AM, EdwardKing wrote:
Maybe a configuration problem; what is the content of your configuration?
On Thu, Apr 17, 2014 at 10:40 AM, 易剑 wrote:
Vinod, I am confused here.
So could you please explain what actually happens under the hood if
"mapreduce.framework.name" is set to "classic" on the cluster side? Or is it
supposed to be set to "yarn" in the first place?
Thanks.
Kim
On Wed, Apr 16, 2014 at 7:06 PM, Vinod Kumar Vavilapalli w
I use hive-0.11.0 under hadoop 2.2.0, as follows:
[hadoop@node1 software]$ hive
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.input.dir.recursive is
deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.ma
How to solve the following problem?
hadoop-hadoop-secondarynamenode-Tencent_VM_39_166_sles10_64.out:
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library
/data/hadoop/hadoop-2.4.0/lib/native/libhadoop.so.1.0.0 which might have
disabled stack guard. The VM will try to fix the sta
This occurs when uploading.
Are these logs generated in any situation?
Is it a dangerous problem?
* hadoop version 1.1.2
* namenode log
2014-04-17 09:30:34,280 INFO namenode.FSNamesystem
(FSNamesystem.java:commitBlockSynchronization(2374)) -
commitBlockSynchronization(lastblock=blk_-8030112303
You cannot run JobTracker/TaskTracker in Hadoop 2. It's neither supported nor
even possible.
+Vinod
On Apr 16, 2014, at 2:27 PM, Kim Chew wrote:
Could anyone please respond to my query above:
why am I getting this warning?
14/04/16 13:08:37 WARN mapreduce.JobSubmitter: Hadoop command-line option
parsing not performed. Implement the Tool interface and execute your
application with ToolRunner to remedy this.
Because of this my libjar is n
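For anyone hitting the same warning: it usually means the driver's main() builds the job directly, so the generic options (-libjars, -D) are never parsed. A minimal sketch of the standard fix (class and job names here are made up for illustration, and it needs the Hadoop client jars on the classpath):

```java
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MyDriver extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
        // getConf() already has -D and -libjars applied by ToolRunner
        Job job = Job.getInstance(getConf(), "my-job");
        job.setJarByClass(MyDriver.class);
        // ... set mapper/reducer and input/output paths from args ...
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // ToolRunner strips the generic options (-D, -libjars, -files)
        // before passing the remaining args to run()
        System.exit(ToolRunner.run(new MyDriver(), args));
    }
}
```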
I have a cluster running Hadoop 2 but it is not running YARN, i.e.
"mapreduce.framework.name" is set to "classic", therefore the ResourceManager
is not running.
On the Client side, I want to submit a job compiled with Hadoop-1.1.1 to
the above cluster. Here how my Hadoop-1.1.1 mapred-site.xml look
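For comparison, a minimal client-side mapred-site.xml for the YARN case might look like the sketch below (only the framework property is shown; everything else is left at defaults):

```xml
<configuration>
  <property>
    <!-- "yarn" submits to the ResourceManager; "classic" meant the old
         JobTracker path; "local" runs the job in-process -->
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```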
Thanks Rahman. This problem can be boiled down to how to submit a job
compiled with Hadoop-1.1.1 remotely to a Hadoop 2 cluster that has not
turned on YARN. I will open another thread for it.
Kim
On Wed, Apr 16, 2014 at 1:30 PM, Abdelrahman Shettia <
ashet...@hortonworks.com> wrote:
> Hi Kim,
>
Hi Kim,
Correction, the command is:
ps aux | grep -i resource
Also, I notice that you are using some JobTracker configurations, which are
not going to be used in Hadoop 2.x. Here is a sample of all the
RM configurations from my one-node sandbox machine:
mapred-site.xml-
mapred
Hi Kim,
You can try to grep on the RM java process by running the following
command:
ps aux | grep
On Wed, Apr 16, 2014 at 10:31 AM, Kim Chew wrote:
> Thanks Rahman, I have mixed things up a little bit in my mapred-site.xml
> so it tried to run the job locally. Now I am running into the pro
Yes, thank you Stanley !
Ashwin
On Tue, Apr 15, 2014 at 8:01 PM, Stanley Shi wrote:
> Is this what you are looking for?
>
> http://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-common/CommandsManual.html#daemonlog
>
> Regards,
> Stanley Shi,
> On Wed, Apr 16, 2014 at 2:06 AM
Thanks Rahman, I have mixed things up a little bit in my mapred-site.xml so
it tried to run the job locally. Now I am running into the problem that
Rahul has: I am unable to connect to the ResourceManager.
The setup of my targeted cluster runs MR1 instead of YARN, hence the "
mapreduce.framewor
Hi Kim,
It looks like it is pointing to an HDFS location. Can you create the HDFS dir
and put the jar there? Hope this helps.
Thanks,
Rahman
On Apr 16, 2014, at 8:39 AM, Rahul Singh wrote:
Any help? All suggestions are welcome.
On Wed, Apr 16, 2014 at 1:13 PM, Rahul Singh wrote:
Hi,
Can somebody help me with how a Hive table is loaded? Can MapReduce jobs be
used to load Hive tables?
Regards
Shashi
Found the root cause.
It is because the nested DISTINCT operation relies on RAM to calculate
unique values.
As described here:
http://stackoverflow.com/questions/10732456/how-to-optimize-a-group-by-statement-in-pig-latin
Thanks,
Lei
leiwang...@gmail.com
From: leiwang...@gmail.com
Moved mapreduce-dev@ to Bcc.
Hi Dharmesh,
The parameter sets the interval for polling the progress
of the MRAppMaster, not of the Map/Reduce tasks. The tasks send
their progress (including counter information) to the MRAppMaster
every 3000 milliseconds, which is hard-coded.
That's why a sudden bi
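If the client-side knob is the one in question here, it is probably mapreduce.client.progressmonitor.pollinterval (an assumption on my part; check mapred-default.xml in your distribution). A sketch of overriding it:

```xml
<property>
  <name>mapreduce.client.progressmonitor.pollinterval</name>
  <!-- milliseconds between client polls of the running job; default 1000 -->
  <value>1000</value>
</property>
```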
When I unzip my jar, I get the class files inside the package hierarchy,
while in your case all the files are lying outside. Are you sure your jar was
properly created?
On Wed, Apr 16, 2014 at 1:21 PM, laozh...@sina.cn wrote:
This is the result.
/home/laozhao0 $ unzip -l myjob.jar
Archive: /home/laozhao0/myjob.jar
  Length     Date    Time   Name
  ------     ----    ----   ----
       0  04-15-2014 14:54  META-INF/
      71  04-15-2014 14:54  META-INF/MANIFEST.MF
    1300  04-15-2014 14:54  MyJob$MapClass.class
    1628  04-15-2014 14:54  MyJob$Reduce.c
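If the classes really do sit at the jar root, repackaging from the compiler's output directory preserves the package hierarchy. A sketch, assuming the classes were compiled into a build/classes directory (that path is my assumption):

```shell
# -C changes into build/classes first, so package dirs like my/package/ are kept
jar cf myjob.jar -C build/classes .
# verify: entries should now read my/package/MyJob.class, not MyJob.class
jar tf myjob.jar
```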
Hi,
I am running with the following command but the jar is still not available to
the mappers and reducers.
hadoop jar /home/hduser/workspace/Minerva.jar my.search.Minerva
/user/hduser/input_minerva_actual /user/hduser/output_merva_actual3
-libjars /home/hduser/Documents/Lib/json-simple-1.1.1.jar
-Dmapre
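One likely cause in the command above: hadoop's generic options (-libjars, -D, -files) are only parsed when they appear before the application's own arguments, and only when the driver goes through ToolRunner/GenericOptionsParser. A sketch of the reordered command (leaving the truncated -D option aside):

```shell
hadoop jar /home/hduser/workspace/Minerva.jar my.search.Minerva \
  -libjars /home/hduser/Documents/Lib/json-simple-1.1.1.jar \
  /user/hduser/input_minerva_actual /user/hduser/output_merva_actual3
```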