Hi All,
I have a doubt about the AvatarNode setup.
I configured the AvatarNode using the patch from
https://issues.apache.org/jira/browse/HDFS-976
Do I need to configure an NFS filer to share the FSImage file between the active
and standby AvatarNodes?
What other configuration is needed?
Hi All,
Which version of Hadoop do the following patches in
https://issues.apache.org/jira/browse/HDFS-976 belong to?
0001-0.20.3_rc2-AvatarNode.patch
AvatarNode.20.patch
Thanks in advance.
Regards,
Shanmuganathan
FWIW, trunk and the future branches have the new MultipleInputs API, which you
can pull and include in your own project.
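As a rough illustration of that backport route (everything below is a sketch: com.example.backport is a placeholder package for the copied MultipleInputs class and its helpers such as DelegatingInputFormat and DelegatingMapper, and the paths come from the command line):

// Sketch only: assumes MultipleInputs and its helper classes have been copied
// from a newer Hadoop branch into your own package, here com.example.backport.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import com.example.backport.MultipleInputs; // the backported class

public class MultiInputJob {
  public static void main(String[] args) throws Exception {
    Job job = new Job(new Configuration(), "multi-input example");
    job.setJarByClass(MultiInputJob.class);

    // Each input path gets its own InputFormat (and Mapper if needed); the
    // identity Mapper keeps the sketch minimal. The sequence file is assumed
    // to hold LongWritable/Text pairs so the output types match both inputs.
    MultipleInputs.addInputPath(job, new Path(args[0]),
        TextInputFormat.class, Mapper.class);
    MultipleInputs.addInputPath(job, new Path(args[1]),
        SequenceFileInputFormat.class, Mapper.class);

    job.setOutputKeyClass(LongWritable.class);
    job.setOutputValueClass(Text.class);
    FileOutputFormat.setOutputPath(job, new Path(args[2]));

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}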
Can anyone please tell me how I can do this? How can I use the MultipleInputs
class from a newer Hadoop version in an older Hadoop version?
Thanks
On Wed, Aug 24, 2011 at 5:50 PM, Harsh J
Hi Shanmuganathan,
I am assuming the Facebook team can provide further context here, but from
the GitHub repo linked in the JIRA it looks like Release 0.20.3 + FB
Changes (unreleased) is the version this was applied to.
You can find the committed changes here:
Can anyone offer me some insight? It may have been due to me running the
start-all.sh script instead of starting the services individually. Not sure.
Thanks
Sean
/
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host =
One of my colleagues has noticed this problem for a while, and now it's
biting me. Jobs seem to be failing before ever really starting. It seems
to be limited (so far) to running in pseudo-distributed mode, since that's
where he saw the problem and where I'm now seeing it; it hasn't come up on
Hi,
I know this question has been asked before, but I could not find
the right solution. Maybe it's because I use Hadoop 0.20.2; some posts assumed
older versions.
My code (relevant chunk):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
Configuration conf =
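(The snippet is cut off in the archive; for context, a minimal sketch of the usual pattern those imports lead into on 0.20.2, with a placeholder path, would be:)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import java.io.IOException;

public class FsExample {
  public static void main(String[] args) throws IOException {
    // Reads core-site.xml/hdfs-site.xml from the classpath; whichever
    // fs.default.name they declare decides if this is HDFS or the local FS.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    System.out.println(fs.exists(new Path("/user/example/input"))); // placeholder path
  }
}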
Using a Hadoop 0.20.203.0 single-node setup, the root directory is writable by
everybody even though I've set its mode to 755 and then even 000 (verified
using -ls).
What could be the problem?
Best regards,
--
S.
I mean hadoop filesystem of course.
Oops, never mind, I had turned off permissions myself :/ Guess that's enough
for Friday evening.
Best regards,
--
S.
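(For anyone who hits the same symptom later: in 0.20.x permission checking is controlled by the dfs.permissions property in hdfs-site.xml; with it set to false every path is writable regardless of its mode bits. A minimal snippet to re-enable it:)

<!-- hdfs-site.xml: re-enable permission checking (defaults to true) -->
<property>
  <name>dfs.permissions</name>
  <value>true</value>
</property>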
Hi -
Is there a way I can start HDFS (the namenode) from a Java main and run unit
tests against that? I need to integrate my Java/HDFS program into unit tests,
and the unit test machine might not have Hadoop installed. I’m currently
running the unit tests by hand with hadoop jar ... My unit
Hi John,
How many tasktrackers do you have? Can you check if your tasktrackers are
running and the total available map and reduce capacity in your cluster?
Can you also post the configuration of the scheduler you are using? You
might also want to check the jobtracker logs. It would help in
On Fri, 26 Aug 2011 11:46:42 -0700, Ramya Sunil ra...@hortonworks.com wrote:
How many tasktrackers do you have? Can you check if your tasktrackers are
running and the total available map and reduce capacity in your cluster?
In pseudo-distributed there's one tasktracker, which is running, and
It depends on what scope you want your unit tests to operate at. There is a
class you might want to look into called MiniMRCluster if you are dead set on
having tests that are as deep as possible, but you can still cover quite a bit with
MRUnit and JUnit4/Mockito.
Matt
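Matt mentions MiniMRCluster for the MapReduce side; for the HDFS-only case in the original question, the companion class MiniDFSCluster from the same test jar can be used. A rough sketch (assumes the 0.20.x hadoop-test jar and JUnit 4 on the test classpath; the file path is a placeholder):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertTrue;

// Rough sketch: spins up an in-process HDFS for tests, so no external
// Hadoop installation is needed on the machine running the unit tests.
public class HdfsProgramTest {
  private MiniDFSCluster cluster;
  private FileSystem fs;

  @Before
  public void setUp() throws Exception {
    Configuration conf = new Configuration();
    // 1 namenode + 1 datanode, formatted fresh for each test run.
    cluster = new MiniDFSCluster(conf, 1, true, null);
    fs = cluster.getFileSystem();
  }

  @After
  public void tearDown() throws Exception {
    if (cluster != null) {
      cluster.shutdown();
    }
  }

  @Test
  public void writesAndReadsAFile() throws Exception {
    Path p = new Path("/test/hello.txt"); // placeholder path
    fs.create(p).close();
    assertTrue(fs.exists(p));
  }
}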
Hi Frank,
You can use the ClusterMapReduceTestCase class from org.apache.hadoop.mapred.
Here is an example of adapting it to JUnit4 and running a test DFS and
MapReduce cluster:
https://github.com/sonalgoyal/hiho/blob/master/test/co/nubetech/hiho/common/HihoTestCase.java
And here is a blog post that discusses
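For anyone reading along in the archive, the base (JUnit3-style) usage of ClusterMapReduceTestCase, before any JUnit4 adaptation like the example linked above, looks roughly like this sketch (createJobConf() and getFileSystem() are helpers provided by the class in the 0.20.x hadoop-test jar; the path and job wiring are placeholders):

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.ClusterMapReduceTestCase;
import org.apache.hadoop.mapred.JobConf;

// Rough sketch: ClusterMapReduceTestCase starts a mini DFS and a mini
// MapReduce cluster in setUp() and tears them down in tearDown(); it is
// JUnit3-style, hence the extends TestCase / testXxx naming.
public class MyJobClusterTest extends ClusterMapReduceTestCase {

  public void testClusterComesUp() throws Exception {
    // JobConf already wired to the in-process JobTracker and NameNode.
    JobConf conf = createJobConf();
    FileSystem fs = getFileSystem();

    Path input = new Path("/testing/input"); // placeholder path
    fs.mkdirs(input);
    assertTrue(fs.exists(input));

    // A real test would now configure conf and submit a job,
    // e.g. with JobClient.runJob(conf).
  }
}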
On Fri, 26 Aug 2011 12:20:47 -0700, Ramya Sunil ra...@hortonworks.com
wrote:
Can you also post the configuration of the scheduler you are using? You
might also want to check the jobtracker logs. It would help in further
debugging.
Where would I find the scheduler configuration? I haven't