Hi,
I'm testing the HA auto-failover within hadoop-2.2.0.
The cluster can fail over manually; however, it fails with automatic
failover.
I set up the HA according to the URL
http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithQJM.html
When I test
Hi, mailing list:
I recently ran into a problem: when I run an MR job, an OOM occurs in the
shuffle process. The MR options are the defaults, unchanged. Which options
should I tune? Thanks.
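If this job runs on MRv2/YARN, the knobs usually tuned for a shuffle-side OOM
are the reducer's shuffle buffer limits. A minimal sketch for mapred-site.xml
follows; the values are illustrative starting points, not settings taken from
this thread:
<property>
  <name>mapreduce.reduce.shuffle.input.buffer.percent</name>
  <value>0.50</value> <!-- share of reducer heap used for shuffle buffers; default is 0.70 -->
</property>
<property>
  <name>mapreduce.reduce.shuffle.memory.limit.percent</name>
  <value>0.20</value> <!-- cap on a single in-memory shuffle segment; default is 0.25 -->
</property>
Lowering these fractions (or raising the reduce-task heap) usually relieves
shuffle OOMs.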
Which fencing method are you using in your configuration? Do you have the
correct SSH configuration between your hosts?
Regards
Jitendra
On Mon, Dec 2, 2013 at 5:34 PM, YouPeng Yang yypvsxf19870...@gmail.com wrote:
Hi,
I'm testing the HA auto-failover within hadoop-2.2.0
The cluster can be
Post your config files, and tell us which method you are following for
automatic failover.
On Mon, Dec 2, 2013 at 5:34 PM, YouPeng Yang yypvsxf19870...@gmail.com wrote:
Hi,
I'm testing the HA auto-failover within hadoop-2.2.0
The cluster can fail over manually; however, it fails with the
Hi Pavan
I'm using sshfence
--core-site.xml-
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://lklcluster</value>
    <final>true</final>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/tmp2</value>
  </property>
</configuration>
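For reference, the fencing method itself is configured in hdfs-site.xml
rather than core-site.xml. A minimal sshfence sketch, where the private-key
path is an assumption and not taken from this thread:
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/home/hadoop/.ssh/id_rsa</value> <!-- assumed key location -->
</property>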
Are you able to connect to both NN hosts using SSH without a password?
Make sure you have the correct SSH keys in the authorized_keys file.
Regards
Jitendra
On Mon, Dec 2, 2013 at 5:50 PM, YouPeng Yang yypvsxf19870...@gmail.com wrote:
Hi Pavan
I'm using sshfence
--core-site.xml-
Hi Jitendra
Yes.
My doubt is that I need to enter 'ssh-agent bash' and 'ssh-add' before I
can ssh to the NNs from each other. Is that a problem?
Regards
2013/12/2 Jitendra Yadav jeetuyadav200...@gmail.com
Are you able to connect to both NN hosts using SSH without a password?
Make sure you have correct
If you are using the hadoop user and you have the correct ssh conf, then
the commands below should work without a password.
Execute from NN2:
# ssh hadoop@NN1_host
Execute from NN1:
# ssh hadoop@NN2_host
Regards
Jitendra
On Mon, Dec 2, 2013 at 6:10 PM, YouPeng Yang
Hi Daniel,
first of all, before posting to a mailing list, take a deep breath and
let your frustrations out. Then write the email. Using words like
"crappy", "toxicware", and "nightmare" is not going to help you get
useful responses.
While I agree that the docs can be confusing, we should try to stay
Hello,
What is the best way to pass a job configuration parameter to a class like
a GroupingComparator, which is instantiated by Hadoop? I know there is a setup
method in the map class, and I could probably initialize some static variable
in setup and use it in the GroupingComparator, but I'm not sure that is correct
Hi
Thanks for your reply. It works.
Formerly, I set up SSH with a passphrase, and before start-dfs.sh or
stop-dfs.sh it was necessary to enter the password once via ssh-agent bash
and ssh-add.
Now I have recreated the RSA key without a passphrase. Finally it works - HA
does the automatic failover.
But I do
Yes, the user is responsible for using the correct model for a given piece
of testing (or unlabeled) data.
2013/12/2 unmesha sreeveni unmeshab...@gmail.com
To make it more general, it's better to separate them, since there might
be multiple batches of training (or to-be-labeled) data, and you only
Hi team,
I see the following errors on datanodes. What is the reason for this, and
how can it be resolved?
2013-12-02 13:11:36,441 WARN org.apache.hadoop.hdfs.DFSClient: DFSOutputStream
ResponseProcessor exception for block
Hi,
Can you share some more logs from the data nodes? Could you please also
share the conf and the cluster size?
Regards
Jitendra
On Mon, Dec 2, 2013 at 8:49 PM, Siddharth Tiwari
siddharth.tiw...@live.com wrote:
Hi team
I see following errors on datanodes. What is the reason for this and how
can
Hi Jeet,
I have a cluster of size 25: 4 admin nodes and 21 datanodes, 2 NN, 2 JT, 3
ZooKeepers, and 3 QJNs.
If you could help me understand what kind of logs you want, I will provide
them to you. Do you need hdfs-site.xml, core-site.xml, and mapred-site.xml?
Cheers !!!
Which Hadoop distro are you using? It would be good if you share the logs
from the data node on which the data block (blk_-2927699636194035560_63092)
exists, and from the name nodes also.
Regards
Jitendra
On Mon, Dec 2, 2013 at 9:13 PM, Siddharth Tiwari
siddharth.tiw...@live.com wrote:
Hi Jeet
I have
Hi All, I posted a Mahout canopy-generation troubleshooting question
last week; however, I didn't get the problem solved. The message below
is the error I received. I'm trying to run canopy generation on about 900
MB worth of information. There are an estimated 120,000 vectors.
I'm currently
Hi
Follow the example provided in
Yarn_dist/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell.
regards
tmp
2013/12/1 Yue Wang terra...@gmail.com
Hi,
I found the page (
Hello,
I am trying to understand why my long-running MapReduce jobs stop after 24
hours (approx.) on a secure cluster.
This is on Cloudera CDH 4.3.0, hence hadoop 2.0.0, using MRv1 (not YARN),
with authentication specified as kerberos. Trying with a short-lived Kerberos
ticket (1h), I see that it
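One hedged hypothesis for the 24-hour mark: HDFS delegation tokens have to be
renewed on a fixed interval, and the stock defaults in hdfs-site.xml line up
with it (values in milliseconds):
<property>
  <name>dfs.namenode.delegation.token.renew-interval</name>
  <value>86400000</value> <!-- 24 hours between required renewals -->
</property>
<property>
  <name>dfs.namenode.delegation.token.max-lifetime</name>
  <value>604800000</value> <!-- 7 days absolute lifetime -->
</property>
If renewal fails (for example, the renewer cannot authenticate), jobs tend to
die at exactly that boundary.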
The comparators are also initialised via the ReflectionUtils code, so
they do try to pass configuration onto the instantiated object if the
class implements the org.apache.hadoop.conf.Configurable interface or
extends the org.apache.hadoop.conf.Configured class (which implements
the interface for
Ian,
This sounds like a good idea. Please feel free to file a Jira for it.
Arpit
On Fri, Nov 22, 2013 at 10:20 AM, Ian Jackson
ian_jack...@trilliumsoftware.com wrote:
It would be nice if HADOOP_CONF_DIR could be set in the environment like
YARN_CONF_DIR. This could be done in
Hi Daniel,
I agree with you that the 2.2 documents are very unfriendly.
In many documents, the only change from 1.x to 2.2 is the format.
There are still many documents to be converted (e.g. Hadoop Streaming).
Furthermore, there are a lot of dead links in the documents.
I've been trying to fix dead links,
The version is really important here.
- If 1.x, then where (NN, JT, TT)?
- If 2.x, then where (AM, NM, ...)? -- probably less likely here, since
the resources are ephemeral.
I know that some older 1.x versions had an issue with the jobtracker having
an ever-expanding hashmap or something like
Hi,
I tried sending this message earlier, but I think I may not have been
subscribed yet as it doesn't seem to have gone through. Apologies if you
have already received it, then.
I've been working on adding a feature to our project that uses libhdfs to
store data in HDFS. I've written a simple
Could you please try:
$ export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:json-simple-1.1.1.jar
$ hadoop jar domain_gold.jar org.select.Driver -libjars
json-simple-1.1.1.jar $INPUT1 $INPUT2 $OUTPUT
2013/10/24 jamal sasha jamalsha...@gmail.com
Oops.. forgot the code:
http://pastebin.com/7XnyVnkv
Robert,
YARN, by default, will only download *resources* from a shared namespace (e.g.
HDFS).
If /home/hadoop/robert/large_jar.jar is available on each node then you can
specify the path as file:///home/hadoop/robert/large_jar.jar and it should work.
Otherwise, you'll need to copy
Daniel,
Apologies if you had a bad experience. If you can point the problems out to
us, we'd be more than happy to fix them - alternately, we'd *love* it if you
could help us improve the docs too.
Now, for the problem at hand:
http://wiki.apache.org/hadoop/CouldOnlyBeReplicatedTo is one place to look.
Thanks Arun,
I already read and did everything recommended at the referred URL. There
isn't any error message in the logfiles. The only error message appears
when I try to put a non-zero file on the HDFS, as posted above. Besides that,
absolutely nothing in the logs is telling me something is wrong
Hi, mailing list:
I run a job on my CDH4.4 YARN framework. Its map tasks
finish very fast, but the reduce is very slow. I checked it with the ps
command and found its working heap size is 200m, so I tried to increase the
heap size used by the reduce task; I added YARN_OPTS=$YARN_OPTS
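Note that YARN_OPTS sets options for the daemon JVMs, not for task
containers. On MRv2 the reduce-task heap is normally set in mapred-site.xml;
a minimal sketch with illustrative values:
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>2048</value> <!-- container size requested for reduce tasks -->
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx1638m</value> <!-- task JVM heap, commonly ~80% of the container -->
</property>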
Hi,
Another auto-failover testing problem:
My HA can auto-failover after I kill the active NN. But when I unplug the
network interface to simulate a hardware failure, the auto-failover seems
not to work, even after waiting for some time - the zkfc logs are at [1].
I'm using the default sshfence.
Another question: why does the map progress go backwards after it reaches
100%?
On Tue, Dec 3, 2013 at 10:07 AM, ch huang justlo...@gmail.com wrote:
Hi, mailing list:
I run a job on my CDH4.4 YARN framework. Its map tasks
finish very fast, but the reduce is very slow. I checked it with
This is still because your fence method is configured improperly.
Please paste your fence configuration, and double-check that you can SSH
from the active NN to the standby NN without a password.
On Tue, Dec 3, 2013 at 10:23 AM, YouPeng Yang yypvsxf19870...@gmail.com wrote:
Hi
Another auto-failover testing
Hi Yu,
Thanks for your response.
I'm sure my SSH setup is good. SSH from the active NN to the standby NN
needs no password.
I attached my config.
--core-site.xml-
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://lklcluster</value>
    <final>true</final>
  </property>
Hi Yu,
I think that when the NIC is unplugged, SSH cannot get through, because it
cannot connect to the failed active NN.
In that case, sshfence will fail.
Am I right?
2013/12/3 YouPeng Yang yypvsxf19870...@gmail.com
Hi Yu
Thanks for your response.
I'm sure my ssh setup is good.
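If that is the failure mode, one commonly used workaround is to list a
fallback fence method after sshfence in hdfs-site.xml so fencing can still
succeed when the old active is unreachable. A hedged sketch; use it with
care, since shell(/bin/true) skips real fencing:
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence
shell(/bin/true)</value> <!-- methods are tried in order, one per line -->
</property>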
Thank you Yexi...Thanks for spending your valuable time.
On Mon, Dec 2, 2013 at 8:22 PM, Yexi Jiang yexiji...@gmail.com wrote:
Yes, the user is responsible for using the correct model for a given piece
of testing (or unlabeled) data.
2013/12/2 unmesha sreeveni unmeshab...@gmail.com
To
13/12/03 11:46:56 INFO mapreduce.Job: map 100% reduce 19%
13/12/03 11:47:33 INFO mapreduce.Job: map 100% reduce 20%
13/12/03 11:47:54 INFO mapreduce.Job: map 100% reduce 21%
13/12/03 11:48:06 INFO mapreduce.Job: map 100% reduce 22%
13/12/03 11:48:17 INFO mapreduce.Job: map 100% reduce 23%
Hi Team,
I have set up a one-node cluster with hadoop-2.2.0. When I run the sample
wordcount example I don't see it in the web UI on ip:8088, nor in the job
history, though my job gets submitted successfully and I see the output file
under the specified path.
Attached are the configuration files that I
Hi Divya,
Please try using this property on the client side:
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
Regards
Nishan
From: divya rai [mailto:divya1...@gmail.com]
Sent: 03 December 2013 11:05 AM
To: user@hadoop.apache.org
Subject: Problem viewing a job in
Can you try the 'yarn application' command? There are a few options you
can use to view the application status with that command.
Also, don't forget to click the 'All Applications' link on the web UI.
Thanks,
Jian
On Mon, Dec 2, 2013 at 9:34 PM, divya rai divya1...@gmail.com wrote:
Hi Team,
This link may help you with the configuration
http://hortonworks.com/blog/how-to-plan-and-configure-yarn-in-hdp-2-0/
Thanks,
Jian
On Mon, Dec 2, 2013 at 7:53 PM, ch huang justlo...@gmail.com wrote:
13/12/03 11:46:56 INFO mapreduce.Job: map 100% reduce 19%
13/12/03 11:47:33 INFO
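For reference, these are the kind of yarn-site.xml memory properties that
article walks through, with illustrative values only:
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>8192</value> <!-- memory a NodeManager offers to containers -->
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value> <!-- smallest container the scheduler will grant -->
</property>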
Hi Everyone,
I am now testing the best performance of my cluster.
Can anyone give me some formulas or suggestions for all the setting values
in an MR program?
e.g.
io.sort.mb
io.sort.spill.percent
mapred.local.dir
mapred.child.java.opts etc.
Any details would be great.
BRs
Geelong
--
From Good To
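A hedged starting point for the MRv1-style properties named above, with
illustrative values; the right numbers depend on the task heap and the
workload:
<property>
  <name>io.sort.mb</name>
  <value>256</value> <!-- map-side sort buffer; allocated inside the task heap -->
</property>
<property>
  <name>io.sort.spill.percent</name>
  <value>0.80</value> <!-- buffer fill fraction that triggers a spill -->
</property>
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1024m</value> <!-- per-task JVM heap -->
</property>
A common rule of thumb is to keep io.sort.mb well under the -Xmx value,
since the sort buffer comes out of the task heap.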
Hi All,
I am doing a test migration from Apache Hadoop-1.2.0 to Apache
Hadoop-2.0.6-alpha in a single-node environment.
I did the following:
* Installed Apache Hadoop-1.2.0
* Ran sample word count MR jobs. The jobs executed successfully.
* I stopped all the services in