Hi,
I am using Herriot with Hadoop 0.21. However, I have run into a problem. What does the test-system target mean in the following ant command from the webpage http://wiki.apache.org/hadoop/HowToUseSystemTestFramework?
ant test-system -Dhadoop.conf.dir.deployed=${HADOOP_CONF_DIR}
Thank you very much!
Shuanqi Wang
You can create multiple users on the cluster because it follows the Linux
security model. But the Hadoop-related processes (NN, DN, JT, TT) need to be
started by a single user, e.g. hadoop.
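Something like this, for example (the account names and ${HADOOP_HOME} below
are just illustrative):

  # one account owns and starts all the daemons
  sudo useradd hadoop
  sudo -u hadoop ${HADOOP_HOME}/bin/start-all.sh

  # other accounts only submit jobs
  sudo useradd alice
  sudo -u alice ${HADOOP_HOME}/bin/hadoop jar myjob.jar ...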
Thanks,
Shah
On Fri, Jul 1, 2011 at 4:57 AM, Mitra Kaseebhotla mitra.kaseebho...@gmail.com wrote:
Not to divert the
Shahnawaz,
If required, the MR and the HDFS processes can be user-separated too.
It's generally a good thing to do in practice (so that MR daemons and
users don't get superuser access to HDFS files).
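For example, a rough sketch (the hdfs/mapred account names are a common
convention, not a requirement):

  sudo useradd hdfs
  sudo useradd mapred
  # HDFS daemons (NN, DN, secondary NN) run as hdfs
  sudo -u hdfs ${HADOOP_HOME}/bin/start-dfs.sh
  # MR daemons (JT, TT) run as mapred, with no superuser rights on HDFS
  sudo -u mapred ${HADOOP_HOME}/bin/start-mapred.sh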
FWIW, I usually flip open the cluster setup guide on ccp.cloudera.com
and everything comes up
Harsh,
Thanks for sharing. Recently, I ran into a multi-user issue on our cluster
because we were always using the same user to run MR jobs, so I found the
workaround I described for it.
I am curious to know: how do we separate the HDFS processes for users?
On Fri, Jul 1, 2011 at 1:39 PM, Harsh J
On Wed, Jun 29, 2011 at 12:27 AM, Matt Davies m...@mattdavies.net wrote:
... I've seen that our cluster could use more bandwidth, but it wasn't
bandwidth to the nodes that made the big difference; it was getting better
switches with better backplanes - the fabric made the difference.
Any
Hi,
I am not sure if this question (as title) has been asked before, but I
didn't find an answer by googling.
I'd like to explain the scenario of my problem:
My program launches several threads at the same time, and each thread submits
a Hadoop job and waits for the job to complete.
The
Hi Steve
I'd assume that one machine in the cluster doesn't have an /etc/hosts entry
for worker1, or that the DNS server is suffering under load. If you can, put
the host list into the /etc/hosts table instead of relying on DNS. If you
do it on all machines, it avoids having to work out
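For instance, something like this on every node (the addresses are made up):

  # /etc/hosts -- keep identical across the cluster
  127.0.0.1     localhost
  192.168.1.10  master
  192.168.1.11  worker1
  192.168.1.12  worker2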
I see this target in ./mapreduce/src/contrib/streaming/build.xml and
./mapreduce/src/contrib/gridmix/build.xml. It looks like they are for running
all of the unit tests for those components.
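So presumably you can also run them per contrib with the same property from
those directories, e.g.:

  cd mapreduce/src/contrib/streaming
  ant test-system -Dhadoop.conf.dir.deployed=${HADOOP_CONF_DIR}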
-Eric
-----Original Message-----
From: Shuanqi Wang (王栓奇) [mailto:wangshua...@163.com]
Sent: Friday, July 01, 2011
Hi,
I am facing a problem: jobs are still running after executing hadoop
job -kill <jobId>. I rebooted the cluster, but the job still cannot be
killed.
The hadoop version is 0.20.2.
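For reference, this is roughly what I ran (the job id below is just an
example):

  hadoop job -list                          # list jobs known to the JT
  hadoop job -kill job_201107010000_0001    # the kill that seems to have no effect
  hadoop job -status job_201107010000_0001  # still reports the job as running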
Any idea?
Thanks in advance!
--
- Juwei
I had difficulty upgrading applications from Hadoop 0.20.2 to Hadoop
0.20.203.0.
The standalone mode runs without problems. In real cluster mode, the
program freezes at map 0% reduce 0% and there is only one attempt file
in the log directory. The only information is in the stdout file:
That looks like an ancient version of Java. Get 1.6.0_24 or _25 from Oracle.
Upgrade to a recent Java and possibly update your C libs.
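Roughly (the JDK install path below is just an example):

  java -version    # check which Java the daemons are actually running
  # then point Hadoop at the new JDK in conf/hadoop-env.sh:
  export JAVA_HOME=/usr/java/jdk1.6.0_25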
Edward
Thanks Edward! I upgraded to 1.6.0_26 and it worked.
[addressing to common-users@]
this target is there to actually kick off test execution. Once the
instrumented cluster bits are deployed, you can start the system tests with
the command you've mentioned.
Basically, this is exactly what the wiki page is saying, I guess.
Cos
On Thu, Jun 30, 2011 at