Include the ${HADOOP}/conf/ dir in the classpath of the Java program.
Alternatively, you can also try:
bin/hadoop jar your_jar main_class args
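If you would rather wire it up from code, something like this should work; a rough sketch, and the conf path below is just a placeholder for your install:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ConfCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // With ${HADOOP}/conf on the classpath, hadoop-site.xml is picked up
        // automatically; otherwise point at it explicitly (placeholder path).
        conf.addResource(new Path("/opt/hadoop/conf/hadoop-site.xml"));
        FileSystem fs = FileSystem.get(conf);
        System.out.println("Default filesystem: " + fs.getUri());
    }
}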
-Sagar
Saju K K wrote:
This is in reference to the sample application in the JAVAWord
It works ...
Thanks Sagar
--saju
Hello!
I'm trying to perform a read test of HDFS files through libhdfs using
the hadoop-0.18.2/src/c++/libhdfs/hdfs_read.c test program. Creating
the files succeeds but reading them fails.
I create two 1 MB local files with hdfs_write.c and then put them into HDFS
using hadoop fs -put.
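For comparison, here is roughly the same read done through the Java FileSystem API; if this works against the same file, the problem is more likely on the libhdfs/JNI side than in HDFS itself. The path below is just a placeholder:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReadCheck {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Placeholder path for one of the 1 MB files written above.
        FSDataInputStream in = fs.open(new Path("/user/test/file1"));
        byte[] buf = new byte[65536];
        long total = 0;
        int n;
        while ((n = in.read(buf)) > 0) {
            total += n;
        }
        in.close();
        System.out.println("Read " + total + " bytes");
    }
}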
Hi all
I am testing the s3n filesystem and trying to copy from HDFS to S3 in the
original format, and I get the following errors:
08/11/24 05:04:49 INFO mapred.JobClient: Running job: job_200811240437_0004
08/11/24 05:04:50 INFO mapred.JobClient: map 0% reduce 0%
08/11/24 05:05:00 INFO mapred.JobClient:
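The log above is cut off before the actual error, so this is not a diagnosis, but a common cause is missing S3 credentials. A minimal single-process sketch of the same copy, where the keys, bucket name and paths are all placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class HdfsToS3n {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // These can also live in hadoop-site.xml; placeholder values here.
        conf.set("fs.s3n.awsAccessKeyId", "YOUR_ACCESS_KEY");
        conf.set("fs.s3n.awsSecretAccessKey", "YOUR_SECRET_KEY");

        Path src = new Path("/user/test/data");        // placeholder HDFS source
        Path dst = new Path("s3n://your-bucket/data"); // placeholder S3 destination

        FileSystem srcFs = FileSystem.get(conf);       // default fs assumed to be HDFS
        FileSystem dstFs = dst.getFileSystem(conf);

        // Copies file-for-file, keeping the original (non-block) format.
        FileUtil.copy(srcFs, src, dstFs, dst, false, conf);
    }
}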
Mithila Nagendra wrote:
I tried dropping the jar files into the lib. It still doesn't work.. The
following is how the lib looks after the new files were put in:
[EMAIL PROTECTED] hadoop-0.17.2.1]$ cd bin
[EMAIL PROTECTED] bin]$ ls
hadoop  hadoop-daemon.sh  rcc  start-all.sh
Scott Whitecross wrote:
Thanks Brian. So you have had luck w/ log4j?
We grab logs off machines by not using log4j and routing to our own
logging infrastructure that can feed events to other boxes via RMI and
queues. This stuff slots in behind commons-logging, with a custom
commons-logging
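For anyone wanting to try the same trick, here is a bare-bones sketch of a custom commons-logging Log that forwards events to your own transport; the class name and the forwarding call are placeholders, not Steve's actual code, and you select the implementation with -Dorg.apache.commons.logging.Log=<your class>:

import org.apache.commons.logging.Log;

// Minimal forwarding Log (sketch). commons-logging instantiates Log
// implementations through a public constructor taking the logger name.
public class ForwardingLog implements Log {
    private final String name;

    public ForwardingLog(String name) { this.name = name; }

    private void emit(String level, Object msg, Throwable t) {
        // Placeholder: hand the event to your own infrastructure (RMI, queue, ...).
        System.err.println("[" + level + "] " + name + ": " + msg
                + (t != null ? " " + t : ""));
    }

    public boolean isTraceEnabled() { return false; }
    public boolean isDebugEnabled() { return false; }
    public boolean isInfoEnabled()  { return true; }
    public boolean isWarnEnabled()  { return true; }
    public boolean isErrorEnabled() { return true; }
    public boolean isFatalEnabled() { return true; }

    public void trace(Object m) { }
    public void trace(Object m, Throwable t) { }
    public void debug(Object m) { }
    public void debug(Object m, Throwable t) { }
    public void info(Object m)  { emit("INFO", m, null); }
    public void info(Object m, Throwable t)  { emit("INFO", m, t); }
    public void warn(Object m)  { emit("WARN", m, null); }
    public void warn(Object m, Throwable t)  { emit("WARN", m, t); }
    public void error(Object m) { emit("ERROR", m, null); }
    public void error(Object m, Throwable t) { emit("ERROR", m, t); }
    public void fatal(Object m) { emit("FATAL", m, null); }
    public void fatal(Object m, Throwable t) { emit("FATAL", m, t); }
}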
Thanks Steve! Will take a look at it..
Mithila
Hey Steve
Out of the following which one do I remove - just making sure.. I got rid
of commons-logging-1.0.4.jar
commons-logging-api-1.0.4.jar
commons-logging-1.1.1-sources.jar commons-logging-1.1.1-sources.jar
Thanks!
Mithila
On Mon, Nov 24, 2008 at 6:32 PM, Steve Loughran [EMAIL PROTECTED]
The third German Hadoop get-together is going to take place on the 9th of December
at the newthinking store in Berlin:
http://upcoming.yahoo.com/event/1383706/?ps=6
You can order drinks directly at the bar in the newthinking store. As this Get
Together takes place in December - Christmas time - there
Mithila Nagendra wrote:
Hey Steve
Out of the following which one do I remove - just making sure.. I got rid
of commons-logging-1.0.4.jar
commons-logging-api-1.0.4.jar
commons-logging-1.1.1-sources.jar commons-logging-1.1.1-sources.jar
Hadoop is currently built with
Hey Steve
I deleted whatever I needed to.. still no luck..
You said that the classpath might be messed up.. Is there some way I can
reset it? For the root user? What path do I set it to?
Mithila
On Mon, Nov 24, 2008 at 8:54 PM, Steve Loughran [EMAIL PROTECTED] wrote:
Mithila Nagendra wrote:
Pig questions should be sent to [EMAIL PROTECTED]
The error you're getting usually means that you have a version of
Hadoop that doesn't match your version of Pig. If you downloaded the
latest Hadoop, that will be the case, as Pig currently supports
Hadoop 0.18, but not 0.19 or top of
The Hudson patch verifier has been running on a patch for the last 10 hours. Is
it stuck, or is it normal for it to take so long on some patches?
Abdul Qadeer
Hi,
I am using 0.18.2 with the fair scheduler (HADOOP-3476).
The purpose of the fair scheduler is to prevent long-running jobs
from blocking short jobs. I gave it a try: start a long job first, then a
short one. The short job is able to grab some map slots and finishes its map
phase quickly, but it
Release 0.19.0 contains many improvements, new features, bug fixes and
optimizations.
For release details and downloads, visit:
http://hadoop.apache.org/core/releases.html
Thanks to all who contributed to this release!
Nigel
Thanks for your feedback.
I think I have found an initial solution. Since the Hadoop job execution
and the web application execution are two different processes, I plan to use
intermediate files as the inter-process communication medium. It seems that it is
impossible to call Hadoop functions directly
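If it turns out the Hadoop jars and conf can be put on the web application's classpath after all, jobs can be submitted programmatically; a rough sketch with the old JobClient API, where the class name and paths are placeholders and the mapper/reducer setup is omitted:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RunningJob;

public class SubmitFromWebApp {
    public static boolean runJob(String in, String out) throws Exception {
        JobConf conf = new JobConf(SubmitFromWebApp.class);
        conf.setJobName("submitted-from-webapp");
        // Mapper, reducer and key/value classes would be configured here.
        FileInputFormat.setInputPaths(conf, new Path(in));
        FileOutputFormat.setOutputPath(conf, new Path(out));
        RunningJob job = JobClient.runJob(conf); // blocks until the job completes
        return job.isSuccessful();
    }
}

The intermediate-file approach still works for passing results back to the web application afterwards.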
Filed HADOOP-4719 for this.
Nicholas Sze.
- Original Message
From: Tsz Wo (Nicholas), Sze [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Friday, November 21, 2008 7:54:27 AM
Subject: Re: ls command output format
Hi Alex,
Yes, the doc about ls is out-dated. Thanks
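Related note: anything that currently screen-scrapes fs -ls output can sidestep format changes by going through the FileStatus API instead; a short sketch, with a placeholder directory:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListDir {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Placeholder directory; pass any HDFS path instead.
        for (FileStatus s : fs.listStatus(new Path("/user/test"))) {
            System.out.println((s.isDir() ? "d" : "-") + s.getPermission()
                    + " " + s.getReplication()
                    + " " + s.getOwner() + " " + s.getGroup()
                    + " " + s.getLen()
                    + " " + s.getPath());
        }
    }
}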
Thanks for creating it. I haven't tried Jira yet and didn't know how to do
this.
Alex
2008/11/25 Tsz Wo Sze [EMAIL PROTECTED]
On Nov 24, 2008, at 8:44 PM, Mahadev Konar wrote:
Hi Dennis,
I don't think that is possible to do.
No, it is not possible.
The block placement is determined
by HDFS internally (which is local, rack-local and off-rack).
Actually, it was changed in 0.17 or so to be node-local, off-rack,
Amar, Thanks for the pointer.
-Original Message-
From: Amar Kamat [mailto:[EMAIL PROTECTED]
Sent: Monday, November 24, 2008 8:43 PM
To: core-user@hadoop.apache.org
Subject: Re: do NOT start reduce task until all mappers are finished
Haijun Cao wrote:
Hi,
I am using 0.18.2 with