I have done a
git branch --all | grep "1.2.1"
and haven't got anything back.
Thanks,
Praveen
On Wed, Dec 18, 2013 at 9:01 PM, Jay Vyas wrote:
> Use "git branch --all" to see what branches are there.
>
Guys,
I have a query regarding git. I have cloned Hadoop repo as
git clone git://git.apache.org/hadoop-common.git
I want to check out the latest Hadoop code (1.2.1), but it gives the below
error.
git checkout release-1.2.1
error: pathspec 'release-1.2.1' did not match any file(s) known to git.
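Apache release points are usually published as Git tags rather than branches (an assumption here, based on common Apache practice), which would explain why `git branch --all | grep "1.2.1"` returns nothing. A minimal sketch of the tag/branch distinction, using a throwaway repository (the tag name is a stand-in; against the real clone you would run the same two listing commands):

```shell
# Sketch: a release point recorded as a tag does not show up in the
# branch listing, only in the tag listing.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git tag release-1.2.1

# A tag does not appear in 'git branch --all'...
branch_hit=$(git branch --all | grep "release-1.2.1" || true)
# ...but it does appear in 'git tag -l'.
tag_hit=$(git tag -l | grep "release-1.2.1")
echo "branch_hit=<$branch_hit> tag_hit=<$tag_hit>"
```

If `git tag -l | grep 1.2.1` shows a matching tag in the real repository, `git checkout release-1.2.1` (or `git checkout -b my-1.2.1 release-1.2.1` to get a local branch) should then work.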
Hi,
The API documentation for o.a.h.filecache.DistributedCache is incomplete.
In the inherited-methods section, none of the method names link to the method details.
Without going through the code, it's not possible to get the method
arguments and the usage.
o.a.h.filecache.DistributedCache (@InterfaceAudien
Hi,
Documentation says that the NM sends the status of the completed containers
to the RM and the RM sends it to the AM. This is the interface (1) below.
What is the purpose of the interface (2)?
1) AMRMProtocol has the below method. The AllocateResponse has the list of
completed containers.
p
Hi,
According to the `Hadoop : The Definitive Guide`
>A delegation token is generated by the server (the NameNode in this case),
and can be thought of as a shared secret between the client and the server.
On the first RPC call to the NameNode, the client has no delegation token,
so it uses Kerber
COUNTER_NAME_MAX_KEY =
"mapreduce.job.counters.counter.name.max";
public static final int COUNTER_NAME_MAX_DEFAULT = 64;
public static final String COUNTER_GROUPS_MAX_KEY =
"mapreduce.job.counters.groups.max";
public static final int COUNTER_GROUPS_MAX_DEFAULT = 50;
Regards,
Praveen
On Sat, Dec 24, 2011
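If the constants above are backed by those configuration keys in the release being used (an assumption; the quoted snippet is truncated), the default limits of 64-character counter names and 50 groups could presumably be raised in mapred-site.xml rather than left hard-coded. An illustrative fragment, with made-up values:

```xml
<!-- mapred-site.xml sketch; property names are from the quoted code,
     the values here are illustrative only -->
<property>
  <name>mapreduce.job.counters.counter.name.max</name>
  <value>128</value>
</property>
<property>
  <name>mapreduce.job.counters.groups.max</name>
  <value>100</value>
</property>
```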
Hi,
I find the below code in 203, 205 and 1.0, but not in trunk and other
releases. Is it not in trunk, or is it done some other way? Also, I noticed
that some of the parameters are not configurable and are hard-coded.
/** limit on the size of the name of the group **
private static final int GROUP_NAME_LIMI
Hi,
How do I know what code changed with a particular JIRA? If I go to
MAPREDUCE-1943, there are multiple patch attachments. Should I go by the
date and pick the latest patch?
Is there any other way to identify the changes done to the code with a
particular JIRA?
Regards,
Praveen
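Besides comparing patch attachments, committed changes can usually be located from the version-control history, since Hadoop commit messages conventionally start with the JIRA key (an assumption about the project's commit-message convention). A sketch against a throwaway repository with a stand-in commit:

```shell
# Sketch: search commit messages for the JIRA key, then inspect the
# matching commit's diff with 'git show <sha>'.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "MAPREDUCE-1943. Demo commit message."
matches=$(git log --oneline --grep "MAPREDUCE-1943" | wc -l | tr -d ' ')
echo "commits mentioning MAPREDUCE-1943: $matches"
```

On an svn checkout, `svn log | grep MAPREDUCE-1943` is the rough equivalent for finding the revision, followed by `svn diff -c <rev>` for the actual change.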
've done a clean and re-build and I still need to explicitly include
> those hadoop common JARS or else the RM won't start. Not sure what is
> actually the problem here.
>
> - Patrick
>
> On Wed, Dec 21, 2011 at 8:25 PM, Praveen Sripati
> wrote:
> > Arun,
> >
>
# Upgrade to include MAPREDUCE-3588 which got committed yesterday.
>
> Arun
>
>
> On Dec 21, 2011, at 9:34 AM, Praveen Sripati wrote:
>
> > Hi,
> >
> > I am making some change to the 0.23 branch and got the latest code and
> > successfully build it. Am able to s
Hi,
I am making some changes to the 0.23 branch; I got the latest code and
built it successfully. I am able to start the NameNode and the DataNode, but
when I start the ResourceManager I am getting 'ClassNotFoundException'. The
classpath to run the ResourceManager is not getting generated properly b
6, 2011 at 6:06 PM, Praveen Sripati >wrote:
>
> > Alejandro,
> >
> > Here is the sequence
> >
> > 1. 'svn get '
> > 2. do a build
> > 3. 'svn up' with no changes
> > 4. do a build
> >
> > Tasks (2) and
wrote:
> Maven does incremental builds.
>
> taking time as in?
>
> Thanks.
>
> Alejandro
>
> On Tue, Dec 6, 2011 at 6:31 AM, Praveen Sripati >wrote:
>
> > Could someone please respond to the below query?
> >
> > Regards,
> > Praveen
>
Could someone please respond to the below query?
Regards,
Praveen
On Tue, Nov 22, 2011 at 11:43 AM, Praveen Sripati
wrote:
> Hi,
>
> Does Maven support incremental builds? After `svn up', the build is taking
> time even without any updates from svn.
>
> Thanks,
> Praveen
>
>
>
> > --Bobby Evans
> >
> > On 12/5/11 11:54 AM, "Harsh J" wrote:
> >
> > Praveen,
> >
> > (Inline.)
> >
> > On 05-Dec-2011, at 10:14 PM, Praveen Sripati wrote:
> >
> >> Hi,
> >>
> >> Recently there
Hi,
Recently there was a query about the Hadoop framework tolerating
map/reduce task failures towards job completion. And the solution was to
set the 'mapreduce.map.failures.maxpercent' and
'mapreduce.reduce.failures.maxpercent' properties. Although this feature
was introduced a couple of
Hi,
According to
http://hadoop.apache.org/common/docs/r0.23.0/hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html
>Completed containers: Once an ApplicationMaster triggers a launch of an
allocated container, it will receive an update from the ResourceManager
when the container completes. The
of adding more docs, for now pls take a look at the
> following older blog post for more details:
>
>
> http://developer.yahoo.com/blogs/hadoop/posts/2011/03/mapreduce-nextgen-scheduler/
>
> Arun
>
> Sent from my iPhone
>
> On Nov 25, 2011, at 11:55 PM, Praveen Srip
Hi,
Let's consider the following scenario
-> The MR Job has an InputSplit on host h1 and h2
-> AM makes a request to the Scheduler for a container on h1 and h2
-> The scheduler responds with containers c1 and c2 on h1 and h2
-> But the AM uses c1 and releases c2 after 15 minutes
In this scenario
`yarn.app.mapreduce.am.scheduler.heartbeat.interval-ms` parameter
defaults to 2000ms (DEFAULT_MR_AM_TO_RM_HEARTBEAT_INTERVAL_MS). So, the
AM sends a heartbeat every 2000ms to the RM. Along with the heartbeat, it
is also piggybacking the resource request, which seems to be an overhead.
The
Hi,
There have been some mails regarding Hadoop and MPI; the JIRA for this is
MAPREDUCE-2911. Does it make sense to use MPI for all the interprocess
communication in Hadoop? If yes, what does it take to make the changes?
Thanks,
Praveen
Hi,
Does Maven support incremental builds? After `svn up', the build is taking
time even without any updates from svn.
Thanks,
Praveen
Hi,
I am trying to run MRv1 jobs in Eclipse.
I have been able to run in Local (Standalone) Mode, but not in
Pseudo-Distributed Mode. In the Pseudo-Distributed Mode, the below exception
is thrown in the Eclipse console. I see a similar exception in the
tasktracker log file also. I start the
nameno
Steve,
> That said, the new MR engine in 0.23 means that changes to the mapreduce
classes can't be backported from trunk to 0.20.x. Changes there should go
into 0.22 and then into 0.20-security.
As far as I know, 0.22/0.20-security are on the old MR engine and 0.23/trunk
are on the new MR Engine.
Hi,
There seem to be continuous changes to 'branch-0.20-security', and also
there are references to it once in a while on the mailing list. What is the
significance of 'branch-0.20-security'? Do all the security-related
features go into this branch and then get ported to the others?
Thanks,
Prave
Hi,
There was a query on StackOverflow regarding high CPU on the client after
submitting jobs (up to 200 jobs in a batch and a 150MB jar file size).
Calculation of the InputSplit may be one of the reasons for the high CPU
on the client. Why should the calculation of the InputSplit happen on the
client?
; [1:08.482s]
> > >> [INFO] hadoop-yarn-server-resourcemanager SKIPPED
> > >> [INFO] hadoop-yarn-server-tests .. SKIPPED
> > >> [INFO] hadoop-yarn-server ........ SKIPPED
> > >> [INFO] hadoop-yarn ..
Hi,
I got the code from SVN for the branch-0.23 and ran the mvn command
svn co http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.23/
mvn clean install package -Pdist -Dtar -DskipTests
There were no errors. But, the hadoop-mapreduce-0.23.0-SNAPSHOT-all.tar.gz
file is not created.
p
-
> [ERROR] Failed to execute goal
> org.codehaus.mojo:make-maven-plugin:1.0-beta-1:make-install (install) on
> project hadoop-yarn-server-nodemanager: make returned an exit value != 0.
> Aborting build; see command output above for more information
Tharindu,
Looks like protoc is not available.
---
Cannot run program "protoc" (in directory "HOME/hadoop-trunk/hadoop-
mapreduce/hadoop-yarn/hadoop-yarn-api"): error=2,
No such file or directory -> [Help 1]
---
Instructions to build protoc are at
http://svn.apache.org/repos/asf/hadoop/com
Hi,
When I try to get the code from svn, I get the below error.
svn co http://svn.apache.org/repos/asf/hadoop/common/trunk/
Atrunk/hadoop-mapreduce/bin/mapred-config.sh
Atrunk/hadoop-mapreduce/bin/stop-mapred.sh
Atrunk/hadoop-mapreduce/bin/mapred
Atrunk/hadoop-mapreduce/bin/start
ar
$HADOOP_MAPRED_HOME/build/hadoop-mapred-examples-0.23.0-SNAPSHOT.jar
randomwriter -Dmapreduce.job.user.name=$USER
-Dmapreduce.randomwriter.bytespermap=1 -Ddfs.blocksize=536870912
-Ddfs.block.size=536870912 -libjars
$YARN_INSTALL/modules/hadoop-mapreduce-client-jobclient-1.0-SNAPSHOT.jar
outpu
aware of any devs using eclipse to run MRv2, I haven't tried it
> myself.
>
> OTOH, if you do get to we are happy to follow your lead, please feel free
> to provide us instructions and we can update the README... thanks.
>
> Arun
>
> On Aug 9, 2011, at 7:39 PM, P
Haoyuan,
RunJar is in the hadoop-common-0.23.0-SNAPSHOT.jar. Do a 'hadoop classpath'
and check if the jar file is there in the classpath location.
Similarly, running 'yarn classpath' will provide the classpath for running
the yarn daemons (RM, NM and HS).
Thanks,
Praveen
On Fri, Aug 12, 2011 at
Hi,
I noticed that yarn-default.xml
(mapreduce/yarn/yarn-server/yarn-server-common/src/main/resources/yarn-default.xml)
has the configurable parameters for yarn.
The installation instructions (1) require adding the
nodemanager.auxiluary.services and
nodemanager.aux.service.mapreduce.shuffle
Haoyuan,
Try the following
cd $HADOOP_COMMON_HOME/lib
ln -s $HADOOP_COMMON_HOME/target/hadoop-common-0.23.0-SNAPSHOT.jar
For me the HADOOP_COMMON_HOME is set to
/home/praveensripati/Hadoop/trunk/hadoop-common
Thanks,
Praveen
On Thu, Aug 11, 2011 at 1:57 AM, Haoyuan Li wrote:
> Hi,
>
> When
>
> OTOH, if you do get to we are happy to follow your lead, please feel free
> to provide us instructions and we can update the README... thanks.
>
> Arun
>
> On Aug 9, 2011, at 7:39 PM, Praveen Sripati wrote:
>
> > Hi,
> >
> > Are there any instructions to
/Reduce
Issue Type: New Feature
Components: mrv2
Affects Versions: 0.23.0
Reporter: praveen sripati
Priority: Minor
Fix For: 0.23.0
Make the ResourceManager, NodeManager and HistoryServer run from Eclipse, so
that it would be easy for
ethodAccessorImpl.invoke0(Native Method)
> > at
> >
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> > at
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> > at java.lang.reflect.Method.invoke(Method.ja
u have to use common and hdfs from trunk branch now and only mapreduce
>> comes from MR-279 branch.
>>
>> Something like this:
>> svn co http://svn.apache.org/repos/asf/hadoop/common/trunk/
>> rm -rf mapreduce
>> svn checkout
>> http://svn.apache.org/repos/asf/hadoo
Hi,
Are there any instructions to start the ResourceManager, NodeManager,
HistoryServer from Eclipse? I got the code from SVN, compiled it and ran the
sample program. The projects have also been exported in Eclipse. I want to
run the RM, NM and HS from Eclipse, so as to see the flow and fix any
pr
on for the log files not being generated? I would like
to see the .log files to see the flow of the MRv2.
Thanks,
Praveen
On Tue, Aug 9, 2011 at 8:57 AM, Praveen Sripati wrote:
>
> Hi,
>
> Looks like the below instructions are a bit outdated. I got the mapreduce
> code from the MR-2
Hi,
Looks like the below instructions are a bit outdated. I got the mapreduce
code from the MR-279 branch and the rest of the code from trunk. The
hadoop-mapreduce-1.0-SNAPSHOT-all.tar.gz file got generated successfully.
http://svn.apache.org/repos/asf/hadoop/common/branches/MR-279/mapreduce/INST
Hi,
I did get the latest code from
http://svn.apache.org/repos/asf/hadoop/common/branches/MR-279/
and ran the below command for mapreduce build
mvn clean install assembly:assembly -DskipTests
to get the following error
[INFO] yarn-api .. FAILURE [2.565s
Hi,
MapReduce is about doing the processing where the data is, since network
bandwidth is the bottleneck. Suppose everything else remains the same and
the network becomes 10x faster; how would this impact MapReduce? Data
locality would no longer be a concern, and data could be moved to a
node
- MR-279/common/conf
- MR-279/hdfs/conf
- MR-279/mapreduce/conf
- $YARN_INSTALL/conf
- Had to install autoconf ('sudo apt-get install autoconf') on Ubuntu 11.04.
Thanks,
Praveen
On Saturday 18 June 2011 05:22 PM, Praveen Sripati wrote:
Hi,
I have got the code from the svn into int
ipati/Hadoop/MR-279/hdfs
export YARN_HOME=/home/praveensripati/Hadoop/hadoop-mapreduce-1.0-SNAPSHOT
export HADOOP_CONF_DIR=
export YARN_CONF_DIR=$HADOOP_CONF_DIR
Thanks,
Praveen
On Saturday 18 June 2011 08:52 AM, Praveen Sripati wrote:
Hi,
Finally, got all the jars built. Now is the time to ru
"tar" target (which depends on the
"docs" target, which requires forrest 0.8) to unblock the progress, as
mvn-install would suffice for common and hdfs builds.
On Thu, Jun 16, 2011 at 7:55 PM, Praveen Sripati
wrote:
Tom,
I downgraded maven and also changed from open-jdk to sun-jd
ing maven 2.x.
Well I've never tried using java6 for java5 home but I would think it
wouldn't work. I thought it was forrest that required java5. I would
suggest using java5.
Tom
On 6/16/11 12:24 PM, "Praveen Sripati" wrote:
Tom,
Note, it looks like your java5.home is point
nd common
built before building hdfs? Or was common failing with the error you mention
below? If you haven't already you might simply try veryclean on everything
and go again in order.
Tom
On 6/16/11 8:10 AM, "Praveen Sripati" wrote:
Hi,
The hdfs build was successful after in
Thanks,
Praveen
On Thursday 16 June 2011 07:55 AM, Luke Lu wrote:
On Wed, Jun 15, 2011 at 6:45 PM, Praveen Sripati
wrote:
Do I need the avro-maven-plugin? When I ran the below command, I got the
error that the pom file was not found. Where do I get the jar and the
pom files for the avro-maven-plugin
On Wednesday 15 June 2011 08:19 PM, Thomas Graves wrote:
>
>
> On 6/15/11 8:54 AM, "Praveen Sripati" wrote:
>
>> Hi,
>>
>> I am trying to build and deploy MRv2 and following the instructions in the
>> INSTALL file. The instructions have to be modifi
,
Praveen
On Wednesday 15 June 2011 08:19 PM, Thomas Graves wrote:
On 6/15/11 8:54 AM, "Praveen Sripati" wrote:
Hi,
I am trying to build and deploy MRv2 and following the instructions in the
INSTALL file. The instructions have to be modified after the recent
re-organisation (H
Hi,
I am trying to build and deploy MRv2 and following the instructions in the
INSTALL file. The instructions have to be modified after the recent
re-organisation (HADOOP-7106)
http://svn.apache.org/repos/asf/hadoop/common/branches/MR-279/mapreduce/INSTALL
1. Is the code under MR-279 branch not
the NM will
> be notified via the RM that the container is not longer valid and the
> NM will go ahead and kill the container.
>
>
> On Tue, Jun 14, 2011 at 8:38 PM, Praveen Sripati
> wrote:
> > Mahadev,
> >
> > MapReduce ApplicationMaster might behave well, but wh
;
> I think you mean ApplicationMaster. Yes, the applicationmaster and
> map/reduce tasks talk directly
> without NM being involved.
>
> > Praveen
> >
> > On Wed, Jun 15, 2011 at 12:59 AM, Arun C Murthy
> wrote:
> >
> >>
> >> O
t the NodeManager. Am I
correct?
Praveen
On Wed, Jun 15, 2011 at 12:59 AM, Arun C Murthy wrote:
>
> On Jun 14, 2011, at 6:31 PM, Praveen Sripati wrote:
>
> Hi,
>>
>> I have gone through MapReduce NextGen Blog entries and JIRA and have the
>> following queries
>&
Hi,
I have gone through MapReduce NextGen Blog entries and JIRA and have the
following queries
>> There is a single API between the Scheduler and the ApplicationMaster:
>> (List newContainers, List
containerStatuses) allocate (List ask, List
release)
>> The AM ask for specific resources via
Hi,
I am trying to setup common, hdfs, mapreduce trunk in eclipse. The common
and hdfs setups were OK, but faced some problems with the mapreduce setup.
Here are the steps I followed
1. Checked out the trunk code for the above projects in eclipse.
2. Ran the ant build from eclipse for compile an