Re: stop-dfs.sh does not work

2013-07-10 Thread deepak rosario tharigopla
Also, you can browse to the JDK root /usr/lib/jvm/jdk1.6.0_43/bin/ and check whether jps is there (JDK 1.6 ships with jps but OpenJDK does not, and it's preferable to use Sun JDK 6 for Hadoop). If it is, simply run jps, which will list all the Java processes. A handy tool.
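
For illustration, on a node running HDFS daemons the jps output might look like the following (the process IDs here are made up):

    $ /usr/lib/jvm/jdk1.6.0_43/bin/jps
    4221 NameNode
    4398 SecondaryNameNode
    4573 DataNode
    5012 Jps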

RE: stop-dfs.sh does not work

2013-07-10 Thread Devaraj k
Hi, Are you trying to stop the DFS with the same user or a different user? Could you check whether these processes are running or not using 'jps' or 'ps'? Thanks, Devaraj k From: YouPeng Yang [mailto:yypvsxf19870...@gmail.com] Sent: 10 July 2013 11:01 To: user@hadoop.apache.org Subject: stop-dfs.sh

Re: HiBench tool not running

2013-07-10 Thread Nitin Pawar
What value have you set for hadoop.job.history.user.location? On Wed, Jul 10, 2013 at 4:56 AM, Shah, Rahul1 rahul1.s...@intel.com wrote: Hi, I am running HiBench on my Hadoop setup and am not able to initialize the history viewer. Caused by java.io.Exception:
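
For reference, that property lives in mapred-site.xml; the value below is purely illustrative (a writable HDFS location — it can also, I believe, be set to none to disable user-side history logging):

    <property>
      <name>hadoop.job.history.user.location</name>
      <value>/user/rahul/job-history</value>
    </property>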

Sqoop and Hadoop

2013-07-10 Thread Fatih Haltas
Hi everyone, I am trying to import data from PostgreSQL to HDFS via Sqoop; however, all the examples I found on the internet talk about Hive, HBase, and similar systems running within Hadoop. I am not using any of these systems. Isn't it possible to import data without having those kind of

Re: Sqoop and Hadoop

2013-07-10 Thread Nitin Pawar
Why not? You can use Sqoop to import to plain text files, Avro files, or sequence files. Here is one example:

    sqoop import --connect conn --username user -P --table table --columns column1,column2,column3,.. --as-textfile
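
Filling in the placeholders for the PostgreSQL case in this thread (the host, port, database, and table names below are hypothetical), the command might look like:

    sqoop import \
        --connect jdbc:postgresql://dbhost:5432/mydb \
        --username dbuser -P \
        --table mytable \
        --target-dir /user/fatih/mytable \
        --as-textfile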

Re: Sqoop and Hadoop

2013-07-10 Thread Bertrand Dechoux
Hi, You don't need Hive or HBase; a basic Hadoop system (HDFS + MapReduce) is enough. I believe the documentation is well done. If you have further questions, you should ask on the correct mailing list: http://sqoop.apache.org/mail-lists.html Bertrand On Wed, Jul 10, 2013 at 10:05 AM, Nitin

Re: Sqoop and Hadoop

2013-07-10 Thread Alexander Alten-Lorenz
Moving to u...@sqoop.apache.org. For the original question - ONE! Google search (sqoop postgres hdfs): http://alexkehayias.tumblr.com/post/44153307024/importing-postgres-data-into-hadoop-hdfs Regards On Jul 10, 2013, at 9:59 AM, Fatih Haltas fatih.hal...@nyu.edu wrote: Hi Everyone, I am

cannot submit a job via java client in hadoop- 2.0.5-alpha

2013-07-10 Thread Francis . Hu
Hi all, I have a hadoop-2.0.5-alpha cluster with 3 data nodes. I have the Resource Manager and all data nodes started and can access the web UI of the Resource Manager. I wrote a Java client to submit a job as the TestJob class below, but the job is never submitted successfully. It throws out an exception all

Re: Sqoop and Hadoop

2013-07-10 Thread Fatih Haltas
Thank you all so much. On Wed, Jul 10, 2013 at 12:09 PM, Alexander Alten-Lorenz wget.n...@gmail.com wrote: Moving to u...@sqoop.apache.org For the original question - ONE! Google search (sqoop postgres hdfs):

Re: cannot submit a job via java client in hadoop- 2.0.5-alpha

2013-07-10 Thread hadoop hive
It shows that you are not using mapreduce.framework.name as yarn. Please resend it; we are unable to see the configuration. On Wed, Jul 10, 2013 at 1:33 AM, Francis.Hu francis...@reachjunction.com wrote: Hi all, I have a hadoop-2.0.5-alpha cluster with 3 data nodes. I have

Re: cannot submit a job via java client in hadoop- 2.0.5-alpha

2013-07-10 Thread Azuryy Yu
You didn't set yarn.nodemanager.address in your yarn-site.xml. On Wed, Jul 10, 2013 at 4:33 PM, Francis.Hu francis...@reachjunction.com wrote: Hi all, I have a hadoop-2.0.5-alpha cluster with 3 data nodes. I have the Resource Manager and all data nodes started and can access web

RE: cannot submit a job via java client in hadoop- 2.0.5-alpha

2013-07-10 Thread Devaraj k
Hi Francis, Could you check whether those configuration files are getting loaded or not? There is a chance that they are not being loaded into the Configuration object because of an invalid path.
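
A quick way to rule that out (the paths below are hypothetical; use wherever your cluster config actually lives) is to add the config files to the Configuration explicitly in the client and verify a known property afterwards:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;

    Configuration conf = new Configuration();
    // Load the cluster config files from an explicit, known-good location.
    conf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
    conf.addResource(new Path("/etc/hadoop/conf/yarn-site.xml"));
    conf.addResource(new Path("/etc/hadoop/conf/mapred-site.xml"));
    // If this prints "local" (or null) instead of "yarn", the files were not picked up.
    System.out.println(conf.get("mapreduce.framework.name"));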

RE: cannot submit a job via java client in hadoop- 2.0.5-alpha

2013-07-10 Thread Devaraj k
'yarn.nodemanager.address' is not required to submit the job; it is only needed on the NM side. Thanks, Devaraj k From: Azuryy Yu [mailto:azury...@gmail.com] Sent: 10 July 2013 16:22 To: user@hadoop.apache.org Subject: Re: cannot submit a job via java client in hadoop- 2.0.5-alpha you

ConnectionException in container, happens only sometimes

2013-07-10 Thread Andrei
Hi, I'm running a CDH4.3 installation of Hadoop with the following simple setup: master-host runs the NameNode, ResourceManager and JobHistoryServer; slave-1-host and slave-2-host run DataNodes and NodeManagers. When I run a simple MapReduce job (both using the streaming API and the Pi example from

RE: ConnectionException in container, happens only sometimes

2013-07-10 Thread Devaraj k
1. I assume this is the task (container) that tries to establish the connection, but what does it want to connect to? It is trying to connect to the MRAppMaster for executing the actual task.

Re: ConnectionException in container, happens only sometimes

2013-07-10 Thread Andrei
Hi Devaraj, thanks for your answer. Yes, I suspected it could be because of host mapping, so I have already checked (and have just re-checked) settings in /etc/hosts of each machine, and they all are ok. I use both fully-qualified names (e.g. `master-host.company.com`) and their shortcuts (e.g.
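
For illustration, a sane /etc/hosts on each node (the IP addresses here are hypothetical) would map both forms of every name to the same address:

    192.168.1.10  master-host.company.com   master-host
    192.168.1.11  slave-1-host.company.com  slave-1-host
    192.168.1.12  slave-2-host.company.com  slave-2-host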

RE: Distributed Cache

2013-07-10 Thread Botelho, Andrew
OK, using job.addCacheFile() seems to compile correctly. However, how do I then access the cached file in my Mapper code? Is there a method that will look for any files in the cache? Thanks, Andrew From: Ted Yu [mailto:yuzhih...@gmail.com] Sent: Tuesday, July 09, 2013 6:08 PM To:

Re: ConnectionException in container, happens only sometimes

2013-07-10 Thread Andrei
If it helps, the full log of the AM can be found here: http://pastebin.com/zXTabyvv . On Wed, Jul 10, 2013 at 4:21 PM, Andrei faithlessfri...@gmail.com wrote: Hi Devaraj, thanks for your answer. Yes, I suspected it could be because of host mapping, so I have already checked (and have just

Re: Issues Running Hadoop 1.1.2 on multi-node cluster

2013-07-10 Thread Leonid Fedotov
Make sure your mapred.local.dir (check it in mapred-site.xml) actually exists and is writable by your MapReduce user. Thank you! Sincerely, Leonid Fedotov On Jul 9, 2013, at 6:09 PM, Kiran Dangeti wrote: Hi Siddharth, While running multi-node we need to take care of the local host
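
A quick check (the path below is hypothetical; substitute whatever mapred.local.dir is set to in your mapred-site.xml):

    ls -ld /var/lib/hadoop/mapred/local
    # fix ownership/permissions if needed, e.g.:
    sudo chown -R mapred:hadoop /var/lib/hadoop/mapred/local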

New Distributed Cache

2013-07-10 Thread Botelho, Andrew
Hi, I am trying to store a file in the Distributed Cache during my Hadoop job. In the driver class, I tell the job to store the file in the cache with this code:

    Job job = Job.getInstance();
    job.addCacheFile(new URI("file name"));

That all compiles fine. In the Mapper code, I try accessing the

Re: EBADF: Bad file descriptor

2013-07-10 Thread Colin McCabe
That's just a warning message. It's not causing your problem; it's just a symptom. You will have to find out why the MR job failed. best, Colin On Wed, Jul 10, 2013 at 8:19 AM, Sanjay Subramanian sanjay.subraman...@wizecommerce.com wrote: 2013-07-10 07:11:50,131 WARN [Readahead Thread

Re: EBADF: Bad file descriptor

2013-07-10 Thread Colin McCabe
To clarify a little bit, the readahead pool can sometimes spit out this message if you close a file while a readahead request is in flight. It's not an error and just reflects the fact that the file was closed hastily, probably because of some other bug which is the real problem. Colin On Wed,

NoClassDefFoundError: org/apache/hadoop/yarn/service/CompositeService

2013-07-10 Thread Yu, Libo
Hi, I tried to run the wordcount example with YARN. Here is the command line:

    hadoop jar share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.0.0-cdh4.3.0.jar wordcount /user/lyu/wordcount/input /user/lyu/wordcount/output

But I got this exception: Exception in thread main

Re: New Distributed Cache

2013-07-10 Thread Omkar Joshi
Did you try JobContext.getCacheFiles()? Thanks, Omkar Joshi *Hortonworks Inc.* http://www.hortonworks.com On Wed, Jul 10, 2013 at 10:15 AM, Botelho, Andrew andrew.bote...@emc.com wrote: Hi, I am trying to store a file in the Distributed Cache during my Hadoop job. In

Re: Distributed Cache

2013-07-10 Thread Omkar Joshi
Try JobContext.getCacheFiles(). Thanks, Omkar Joshi *Hortonworks Inc.* http://www.hortonworks.com On Wed, Jul 10, 2013 at 6:31 AM, Botelho, Andrew andrew.bote...@emc.com wrote: OK, using job.addCacheFile() seems to compile correctly. However, how do I then access the cached file in my

Re: can not start yarn

2013-07-10 Thread Omkar Joshi
You should probably run jps every time you start/stop the NM/RM, just so you know whether the RM/NM started/stopped successfully. Devaraj is right: try checking the RM logs. Thanks, Omkar Joshi *Hortonworks Inc.* http://www.hortonworks.com On Tue, Jul 9, 2013 at 8:20 PM, Devaraj k

Re: ConnectionException in container, happens only sometimes

2013-07-10 Thread Omkar Joshi
Can you post the RM/NM logs too? Thanks, Omkar Joshi *Hortonworks Inc.* http://www.hortonworks.com On Wed, Jul 10, 2013 at 6:42 AM, Andrei faithlessfri...@gmail.com wrote: If it helps, the full log of the AM can be found here: http://pastebin.com/zXTabyvv . On Wed, Jul 10, 2013 at 4:21 PM, Andrei

RE: Distributed Cache

2013-07-10 Thread Botelho, Andrew
OK, so JobContext.getCacheFiles() returns URI[]. Let's say I only stored one folder in the cache that has several .txt files within it. How do I use that returned URI to read each line of those .txt files? Basically, how do I read my cached file(s) after I call JobContext.getCacheFiles()?

Re: Distributed Cache

2013-07-10 Thread Omkar Joshi
    // Localized (on-disk) copies of the files in the distributed cache:
    Path[] cachedFilePaths =
        DistributedCache.getLocalCacheFiles(context.getConfiguration());
    for (Path cachedFilePath : cachedFilePaths) {
        File cachedFile = new File(cachedFilePath.toUri().getRawPath());
        System.out.println("cached file path " + cachedFilePath);
    }

RE: NoClassDefFoundError: org/apache/hadoop/yarn/service/CompositeService

2013-07-10 Thread Devaraj k
Hi Libo, MRAppMaster is not able to load the YARN-related jar files. Is this the classpath used by MRAppMaster or by some other process?
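
As a quick sanity check (a generic tip; output will vary per installation), the hadoop launcher can print the classpath it builds, which should include the YARN jars:

    hadoop classpath | tr ':' '\n' | grep yarn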

Re: New Distributed Cache

2013-07-10 Thread Shahab Yunus
Also, once you have the array of URIs after calling getCacheFiles, you can iterate over them using the File class or Path ( http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/Path.html#Path(java.net.URI) ) Regards, Shahab On Wed, Jul 10, 2013 at 5:08 PM, Omkar Joshi
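
Putting the thread's suggestions together, a minimal Mapper sketch might look like the following (the class name is made up, and it assumes the files were added with job.addCacheFile() so that each one is symlinked by its base name into the task's working directory):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.net.URI;

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class CacheReadingMapper extends Mapper<LongWritable, Text, Text, Text> {

      @Override
      protected void setup(Context context) throws IOException, InterruptedException {
        URI[] cacheFiles = context.getCacheFiles(); // the URIs passed to job.addCacheFile()
        if (cacheFiles == null) {
          return;
        }
        for (URI uri : cacheFiles) {
          // Cached files are symlinked into the task's working directory
          // under their base name, so they can be opened as local files.
          String baseName = new Path(uri.getPath()).getName();
          BufferedReader reader = new BufferedReader(new FileReader(baseName));
          try {
            String line;
            while ((line = reader.readLine()) != null) {
              // process each line of the cached .txt file here
            }
          } finally {
            reader.close();
          }
        }
      }
    }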

yarn Failed to bind to: 0.0.0.0/0.0.0.0:8080

2013-07-10 Thread ch huang
I have 3 NMs. On one of the NM boxes, port 8080 is already occupied by Tomcat, so I want to change the 8080 port to 8090 on all NMs, but the problem is that I do not know which option in YARN controls port 8080. Can anyone help?

RE: cannot submit a job via java client in hadoop- 2.0.5-alpha

2013-07-10 Thread Francis . Hu
Actually, I have mapreduce.framework.name configured in mapred-site.xml, see below:

    <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
      <description>Execution framework set to Hadoop YARN.</description>
    </property>

From: hadoop hive [mailto:hadooph...@gmail.com] Sent: Wednesday,

RE: cannot submit a job via java client in hadoop- 2.0.5-alpha

2013-07-10 Thread Francis . Hu
Hi Devaraj k and Azuryy Yu, thanks to both of you. I just got it resolved. The problem is that the jar highlighted below was not included on my Java client side, so when the job is initializing, it cannot find the class YarnClientProtocolProvider to do further initialization. Then it causes
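
For readers hitting the same error: in Hadoop 2.0.x, YarnClientProtocolProvider ships in the hadoop-mapreduce-client-jobclient artifact, so (assuming that is the jar referred to above) a Maven-based client would add something like:

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-mapreduce-client-jobclient</artifactId>
      <version>2.0.5-alpha</version>
    </dependency>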

Re: yarn Failed to bind to: 0.0.0.0/0.0.0.0:8080

2013-07-10 Thread பாலாஜி நாராயணன்
On Wednesday, 10 July 2013, ch huang wrote: I have 3 NMs. On one of the NM boxes, port 8080 is already occupied by Tomcat, so I want to change the 8080 port to 8090 on all NMs, but the problem is that I do not know which option in YARN controls port 8080. Can anyone help? Why would you want to do

Re: NoClassDefFoundError: org/apache/hadoop/yarn/service/CompositeService

2013-07-10 Thread 闫昆
Wow! I had the same problem and it was terrible; I spent three days on it. The way you resolved your question is also how I resolved mine. Thanks. 2013/7/11 Devaraj k devara...@huawei.com Hi Libo, MRAppMaster is not able to load the YARN-related jar files. Is this the classpath

Re: yarn Failed to bind to: 0.0.0.0/0.0.0.0:8080

2013-07-10 Thread Hitesh Shah
You are probably hitting a clash with the shuffle port. Take a look at https://issues.apache.org/jira/browse/MAPREDUCE-5036 -- Hitesh On Jul 10, 2013, at 8:19 PM, Harsh J wrote: Please see yarn-default.xml for the list of options you can tweak:

RE: yarn Failed to bind to: 0.0.0.0/0.0.0.0:8080

2013-07-10 Thread Devaraj k
Hi, If you are using a release which doesn't have the patch https://issues.apache.org/jira/browse/MAPREDUCE-5036, then port 8080 will be used by the Node Manager's shuffle handler service. You can change this default port '8080' to some other value using the configuration mapreduce.shuffle.port
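
For example, assuming a release without that patch, adding the following to mapred-site.xml on each NodeManager (and restarting the NMs) would move the shuffle service to 8090:

    <property>
      <name>mapreduce.shuffle.port</name>
      <value>8090</value>
    </property>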

Re: Issues Running Hadoop 1.1.2 on multi-node cluster

2013-07-10 Thread Ram
Hi, Please check that all the directories/files configured in mapred-site.xml exist on the local system, and that the files/directories have mapred as the user and hadoop as the group. From, P.Ramesh Babu, +91-7893442722. On Wed, Jul 10, 2013 at 9:36 PM, Leonid Fedotov