Also,
You can browse to this location, which is the JDK root:
/usr/lib/jvm/jdk1.6.0_43/bin/
If you can find jps there (JDK 1.6 comes with jps, but OpenJDK does not, and the Sun JDK 6 is preferable for Hadoop), simply type jps and execute it, which will give you all the Java processes running on the machine.
Quite handy.
Hi,
Are you trying to stop the DFS with the same user or a different user?
Could you check whether these processes are running or not, using 'jps' or 'ps'?
Thanks
Devaraj k
From: YouPeng Yang [mailto:yypvsxf19870...@gmail.com]
Sent: 10 July 2013 11:01
To: user@hadoop.apache.org
Subject: stop-dfs.sh
What value have you set for hadoop.job.history.user.location?
On Wed, Jul 10, 2013 at 4:56 AM, Shah, Rahul1 rahul1.s...@intel.com wrote:
Hi,
I am running HiBench on my Hadoop setup.
I am not able to initialize the History Viewer:
Caused by: java.io.IOException:
Hi everyone,
I am trying to import data from PostgreSQL to HDFS via Sqoop. However, all the examples I found on the internet talk about Hive, HBase, and similar systems running within Hadoop.
I am not using any of these systems. Isn't it possible to import data without having those kinds of systems?
Why not?
You can use Sqoop to import to plain text files, Avro files, or sequence files.
Here is one example:
sqoop import \
  --connect conn \
  --username user \
  -P \
  --table table \
  --columns column1,column2,column3,... \
  --as-textfile
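Note that with --as-textfile, Sqoop writes plain text files straight to HDFS; by default they land under your HDFS home directory in a directory named after the table, and --target-dir lets you choose a different location. No Hive or HBase is involved.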
Hi,
You don't need Hive or HBase. A basic Hadoop system (HDFS + MapReduce) is enough.
I believe the documentation is well done.
If you have further questions, you should ask on the correct mailing list.
http://sqoop.apache.org/mail-lists.html
Bertrand
On Wed, Jul 10, 2013 at 10:05 AM, Nitin
Moving to u...@sqoop.apache.org
For the original question - ONE! google search (sqoop postgres hdfs):
http://alexkehayias.tumblr.com/post/44153307024/importing-postgres-data-into-hadoop-hdfs
Regards
On Jul 10, 2013, at 9:59 AM, Fatih Haltas fatih.hal...@nyu.edu wrote:
Hi Everyone,
I am
Hi all,
I have a hadoop-2.0.5-alpha cluster with 3 data nodes. I have the Resource Manager and all data nodes started, and I can access the web UI of the Resource Manager.
I wrote a Java client to submit a job as the TestJob class below. But the job is never submitted successfully. It throws an exception all
Thank you all so much.
On Wed, Jul 10, 2013 at 12:09 PM, Alexander Alten-Lorenz
wget.n...@gmail.com wrote:
Moving to u...@sqoop.apache.org
For the original question - ONE! google search (sqoop postgres hdfs):
Here it shows that you are not using mapreduce.framework.name as yarn. Please resend it; we are unable to see the configuration.
On Wed, Jul 10, 2013 at 1:33 AM, Francis.Hu francis...@reachjunction.com wrote:
Hi all,
I have a hadoop-2.0.5-alpha cluster with 3 data nodes. I have
You didn't set yarn.nodemanager.address in your yarn-site.xml.
On Wed, Jul 10, 2013 at 4:33 PM, Francis.Hu francis...@reachjunction.com wrote:
Hi all,
I have a hadoop-2.0.5-alpha cluster with 3 data nodes. I have the Resource Manager and all data nodes started and can access the web
Hi Francis,
Could you check whether those configuration files are getting loaded or not? There is a chance that these configuration files are not getting loaded into the Configuration object due to an invalid path.
'yarn.nodemanager.address' is not required to submit the job; it will be required only on the NM side.
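For what it's worth, a minimal sketch of such a check (the config file paths and class name are assumptions for illustration; adjust them to your layout):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class ConfCheck {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Explicitly add the cluster config files (paths are assumptions;
    // point them at wherever your mapred-site.xml/yarn-site.xml live).
    conf.addResource(new Path("/etc/hadoop/conf/mapred-site.xml"));
    conf.addResource(new Path("/etc/hadoop/conf/yarn-site.xml"));
    // If this prints null or "local", the files were not picked up
    // and job submission would not go to YARN.
    System.out.println("mapreduce.framework.name = "
        + conf.get("mapreduce.framework.name"));
  }
}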
Thanks
Devaraj k
From: Azuryy Yu [mailto:azury...@gmail.com]
Sent: 10 July 2013 16:22
To: user@hadoop.apache.org
Subject: Re: cannot submit a job via java client in hadoop- 2.0.5-alpha
you
Hi,
I'm running a CDH4.3 installation of Hadoop with the following simple setup:
master-host: runs NameNode, ResourceManager and JobHistoryServer
slave-1-host and slave-2-host: DataNodes and NodeManagers.
When I run a simple MapReduce job (both using the streaming API and the Pi example from
1. I assume this is the task (container) that tries to establish the connection, but what does it want to connect to?
It is trying to connect to the MRAppMaster to execute the actual task.
Hi Devaraj,
thanks for your answer. Yes, I suspected it could be because of host
mapping, so I have already checked (and have just re-checked) settings in
/etc/hosts of each machine, and they all are ok. I use both fully-qualified
names (e.g. `master-host.company.com`) and their shortcuts (e.g.
OK, using job.addCacheFile() seems to compile correctly.
However, how do I then access the cached file in my Mapper code? Is there a
method that will look for any files in the cache?
Thanks,
Andrew
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Tuesday, July 09, 2013 6:08 PM
To:
If it helps, the full log of the AM can be found here: http://pastebin.com/zXTabyvv
On Wed, Jul 10, 2013 at 4:21 PM, Andrei faithlessfri...@gmail.com wrote:
Hi Devaraj,
thanks for your answer. Yes, I suspected it could be because of host
mapping, so I have already checked (and have just
Make sure your mapred.local.dir (check it in mapred-site.xml) actually exists and is writable by your MapReduce user.
Thank you!
Sincerely,
Leonid Fedotov
On Jul 9, 2013, at 6:09 PM, Kiran Dangeti wrote:
Hi Siddharth,
While running multi-node, we need to take care of the local host
Hi,
I am trying to store a file in the Distributed Cache during my Hadoop job.
In the driver class, I tell the job to store the file in the cache with this
code:
Job job = Job.getInstance();
job.addCacheFile(new URI("file name"));
That all compiles fine. In the Mapper code, I try accessing the
That's just a warning message. It's not causing your problem; it's just a symptom.
You will have to find out why the MR job failed.
best,
Colin
On Wed, Jul 10, 2013 at 8:19 AM, Sanjay Subramanian
sanjay.subraman...@wizecommerce.com wrote:
2013-07-10 07:11:50,131 WARN [Readahead Thread
To clarify a little bit, the readahead pool can sometimes spit out this
message if you close a file while a readahead request is in flight. It's
not an error and just reflects the fact that the file was closed hastily,
probably because of some other bug which is the real problem.
Colin
On Wed,
Hi,
I tried to run the wordcount example with yarn. Here is the command line:
hadoop jar share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.0.0-cdh4.3.0.jar \
  wordcount /user/lyu/wordcount/input /user/lyu/wordcount/output
But I got this exception:
Exception in thread "main"
Did you try JobContext.getCacheFiles()?
Thanks,
Omkar Joshi
*Hortonworks Inc.* http://www.hortonworks.com
On Wed, Jul 10, 2013 at 10:15 AM, Botelho, Andrew andrew.bote...@emc.com wrote:
Hi,
I am trying to store a file in the Distributed Cache during my Hadoop job.
In
try JobContext.getCacheFiles()
Thanks,
Omkar Joshi
*Hortonworks Inc.* http://www.hortonworks.com
On Wed, Jul 10, 2013 at 6:31 AM, Botelho, Andrew andrew.bote...@emc.com wrote:
Ok using job.addCacheFile() seems to compile correctly.
However, how do I then access the cached file in my
Probably you should run jps every time you start/stop the NM/RM, just so you know whether the RM/NM started/stopped successfully or not.
Devaraj is right; try checking the RM logs.
Thanks,
Omkar Joshi
*Hortonworks Inc.* http://www.hortonworks.com
On Tue, Jul 9, 2013 at 8:20 PM, Devaraj k
Can you post the RM/NM logs too?
Thanks,
Omkar Joshi
*Hortonworks Inc.* http://www.hortonworks.com
On Wed, Jul 10, 2013 at 6:42 AM, Andrei faithlessfri...@gmail.com wrote:
If it helps, the full log of the AM can be found here: http://pastebin.com/zXTabyvv
On Wed, Jul 10, 2013 at 4:21 PM, Andrei
OK, so JobContext.getCacheFiles() returns URI[].
Let's say I only stored one folder in the cache that has several .txt files
within it. How do I use that returned URI to read each line of those .txt
files?
Basically, how do I read my cached file(s) after I call
JobContext.getCacheFiles()?
Path[] cachedFilePaths =
    DistributedCache.getLocalCacheFiles(context.getConfiguration());
for (Path cachedFilePath : cachedFilePaths) {
  File cachedFile = new File(cachedFilePath.toUri().getRawPath());
  System.out.println("cached file path: " + cachedFile.getAbsolutePath());
}
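(Side note: DistributedCache.getLocalCacheFiles() is the older API and returns the localized on-disk Paths, while JobContext.getCacheFiles() returns the original URIs that were registered via job.addCacheFile().)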
Hi Libo,
MRAppMaster is not able to load the YARN-related jar files.
Is this the classpath used by MRAppMaster or by another process?
Also, once you have the array of URIs after calling getCacheFiles(), you can iterate over them using the File class or Path (http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/Path.html#Path(java.net.URI)).
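For illustration, a minimal Mapper sketch (the class name and key/value types are assumptions for the example) that reads each cached file line by line in setup():

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URI;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class CacheReadingMapper extends Mapper<LongWritable, Text, Text, Text> {
  @Override
  protected void setup(Context context) throws IOException, InterruptedException {
    // getCacheFiles() returns the URIs registered with job.addCacheFile()
    URI[] cacheFiles = context.getCacheFiles();
    if (cacheFiles == null) {
      return;
    }
    FileSystem fs = FileSystem.get(context.getConfiguration());
    for (URI cacheFile : cacheFiles) {
      // Opens the registered (e.g. HDFS) URI directly; the localized
      // copy on the node could be read instead via the local paths.
      try (BufferedReader reader = new BufferedReader(
          new InputStreamReader(fs.open(new Path(cacheFile))))) {
        String line;
        while ((line = reader.readLine()) != null) {
          // process one line of the cached file here
        }
      }
    }
  }
}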
Regards,
Shahab
On Wed, Jul 10, 2013 at 5:08 PM, Omkar Joshi
I have 3 NMs. On the box of one of the NMs, port 8080 is already occupied by Tomcat, so I want to change the 8080 port to 8090 on all NMs. The problem is that I don't know which option in YARN controls port 8080. Can anyone help?
Actually, I have mapreduce.framework.name configured in mapred-site.xml; see below:
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
  <description>Execution framework set to Hadoop YARN.</description>
</property>
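For reference, a minimal job-submission client along these lines (a hypothetical outline, not the original TestJob; it assumes the cluster's *-site.xml files are on the client classpath so the RM address resolves, and that the MapReduce client jars, including the one providing YarnClientProtocolProvider, are on the classpath too):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class YarnSubmitSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Make the framework explicit; if mapred-site.xml is not picked up,
    // the client silently falls back to the local job runner.
    conf.set("mapreduce.framework.name", "yarn");

    Job job = Job.getInstance(conf, "test-job");
    job.setJarByClass(YarnSubmitSketch.class);
    // The default identity map/reduce is enough to exercise submission.
    job.setOutputKeyClass(LongWritable.class);
    job.setOutputValueClass(Text.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}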
From: hadoop hive [mailto:hadooph...@gmail.com]
Sent: Wednesday,
Hi Devaraj k and Azuryy Yu,
Thanks to both of you.
I just got it resolved. The problem is that the jar highlighted below was not included on my Java client side, so when the job was initializing it could not find the class YarnClientProtocolProvider to do further initialization. Then it causes
On Wednesday, 10 July 2013, ch huang wrote:
I have 3 NMs. On the box of one of the NMs, port 8080 is already occupied by Tomcat, so I want to change the 8080 port to 8090 on all NMs. The problem is that I don't know which option in YARN controls port 8080. Can anyone help?
Why would you want to do
Wow! I had the same problem, and it was terrible; I spent three days on it. The answer to your question is also the solution to mine.
Thanks
2013/7/11 Devaraj k devara...@huawei.com
Hi Libo,
MRAppMaster is not able to load the YARN-related jar files.
Is this the classpath
You are probably hitting a clash with the shuffle port. Take a look at
https://issues.apache.org/jira/browse/MAPREDUCE-5036
-- Hitesh
On Jul 10, 2013, at 8:19 PM, Harsh J wrote:
Please see yarn-default.xml for the list of options you can tweak:
Hi,
If you are using a release which doesn't have this patch (https://issues.apache.org/jira/browse/MAPREDUCE-5036), then port 8080 will be used by the Node Manager's shuffle handler service.
You can change this default port '8080' to some other value using the configuration property mapreduce.shuffle.port.
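For reference, MAPREDUCE-5036 changed the default shuffle port away from 8080 (to 13562, if I recall correctly) precisely to avoid this kind of clash.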
Hi,
Please check that all the directories/files configured in mapred-site.xml exist on the local system, and that the files/directories have the right permissions, with mapred as user and hadoop as group.
Hi,
From,
P.Ramesh Babu,
+91-7893442722.
On Wed, Jul 10, 2013 at 9:36 PM, Leonid Fedotov