Re: [DISCUSSION] Merging HDFS-7240 Object Store (Ozone) to trunk

2017-11-03 Thread Ravi Prakash
Hi folks!

Thank you for sharing the design docs and for the tremendous amount of work
that has gone into Ozone. I'm grateful that at least someone is trying to
drastically improve HDFS.

*If* there is a meeting to discuss this merge, could I please also be
invited?

Have we ever thought about distributing the Namenode metadata across nodes
dynamically based on load and RPC times (unlike static federation that we
have now)?

Also, I think a major feature that HDFS still lacks (and a lot of our users
ask for) is BCP (business continuity planning) / disaster recovery. I only
bring this up to see whether the choice of proposed design would have
implications for that later on.

Thanks,
Ravi

On Fri, Nov 3, 2017 at 1:56 PM, sanjay Radia  wrote:

> Konstantine,
>  Thanks for your comments, questions and feedback. I have attached a
> document to the HDFS-7240 jira
>  that explains a design for scaling HDFS and how Ozone paves the way
> towards the full solution.
>
>
> https://issues.apache.org/jira/secure/attachment/12895963/HDFS%20Scalability%20and%20Ozone.pdf
>
>
> sanjay
>
>
>
>
> > On Oct 28, 2017, at 2:00 PM, Konstantin Shvachko 
> wrote:
> >
> > Hey guys,
> >
> > It is an interesting question whether Ozone should be a part of Hadoop.
> > There are two main reasons why I think it should not.
> >
> > 1. With close to 500 sub-tasks, with 6 MB of code changes, and with a
> > sizable community behind, it looks to me like a whole new project.
> > It is essentially a new storage system, with different (than HDFS)
> > architecture, separate S3-like APIs. This is really great - the World
> sure
> > needs more distributed file systems. But it is not clear why Ozone should
> > co-exist with HDFS under the same roof.
> >
> > 2. Ozone is probably just the first step in rebuilding HDFS under a new
> > architecture. With the next steps presumably being HDFS-10419 and
> > HDFS-8.
> > The design doc for the new architecture has never been published. I can
> > only assume, based on some presentations and personal communications, that
> > the idea is to use Ozone as block storage and re-implement the NameNode so
> > that it stores only a partial namespace in memory, while the bulk of it
> > (cold data) is persisted to local storage.
> > Such an architecture makes me wonder whether it solves Hadoop's main problems.
> > There are two main limitations in HDFS:
> >  a. The throughput of namespace operations, which is limited by the number
> > of RPCs the NameNode can handle.
> >  b. The number of objects (files + blocks) the system can maintain, which
> > is limited by the memory size of the NameNode.
> > The RPC performance (a) is more important for Hadoop scalability than the
> > object count (b), with read RPCs being the main priority.
> > The new architecture targets the object count problem, but at the expense
> > of RPC throughput, which seems to be the wrong resolution of the tradeoff.
> > Also, based on the usage patterns on our large clusters, we read up to 90%
> > of the data we write, so cold data is a small fraction and most of it must
> > be cached.
> >
> > To summarize:
> > - Ozone is a big enough system to deserve its own project.
> > - The architecture that Ozone leads to does not seem to solve the
> intrinsic
> > problems of current HDFS.
> >
> > I will post my opinion in the Ozone jira. Should be more convenient to
> > discuss it there for further reference.
> >
> > Thanks,
> > --Konstantin
> >
> >
> >
> > On Wed, Oct 18, 2017 at 6:54 PM, Yang Weiwei 
> wrote:
> >
> >> Hello everyone,
> >>
> >>
> >> I would like to start this thread to discuss merging Ozone (HDFS-7240)
> to
> >> trunk. This feature implements an object store which can co-exist with
> >> HDFS. Ozone is disabled by default. We have tested Ozone with cluster
> sizes
> >> varying from 1 to 100 data nodes.
> >>
> >>
> >>
> >> The merge payload includes the following:
> >>
> >>  1.  All services, management scripts
> >>  2.  Object store APIs, exposed via both REST and RPC
> >>  3.  Master service UIs, command line interfaces
> >>  4.  Pluggable pipeline integration
> >>  5.  Ozone File System (Hadoop-compatible file system implementation,
> >> passes all FileSystem contract tests)
> >>  6.  Corona - a load generator for Ozone.
> >>  7.  Essential documentation added to the Hadoop site.
> >>  8.  Version-specific Ozone documentation, accessible via the service UI.
> >>  9.  Docker support for Ozone, which enables faster development cycles.
> >>
> >>
> >> To build Ozone and run Ozone using Docker, please follow the instructions in
> >> this wiki page: https://cwiki.apache.org/confluence/display/HADOOP/Dev+cluster+with+docker
> >>
> >>
> >> We have built a passionate and diverse community to drive this feature's
> >> development. As a team, we have achieved significant progress in the past 3
> >> years since the first JIRA for HDFS-7240 was opened in Oct 2014. So far, we
> >> have resolved almost 400 JIRAs by 20+ 

Re: [VOTE] Release Apache Hadoop 2.8.2 (RC1)

2017-10-24 Thread Ravi Prakash
Thanks for all your hard work Junping!

* Checked signature.
* Ran a sleep job.
* Checked NN File browser UI works.

+1 (binding)

Cheers
Ravi

On Tue, Oct 24, 2017 at 12:26 PM, Rakesh Radhakrishnan 
wrote:

> Thanks Junping for getting this out.
>
> +1 (non-binding)
>
> * Built from source on CentOS 7.3.1611, jdk1.8.0_111
> * Deployed 3 node cluster
> * Ran some sample jobs
> * Ran balancer
> * Operated HDFS from the command line: ls, put, dfsadmin, etc.
> * HDFS Namenode UI looks good
>
>
> Thanks,
> Rakesh
>
> On Fri, Oct 20, 2017 at 6:12 AM, Junping Du  wrote:
>
> > Hi folks,
> >  I've created our new release candidate (RC1) for Apache Hadoop
> 2.8.2.
> >
> >  Apache Hadoop 2.8.2 is the first stable release of the Hadoop 2.8 line
> > and will be the latest stable/production release for Apache Hadoop - it
> > includes 315 newly fixed issues since 2.8.1, 69 of which are marked as
> > blocker/critical.
> >
> >   More information about the 2.8.2 release plan can be found here:
> > https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+2.8+Release
> >
> >   New RC is available at: http://home.apache.org/~junping_du/hadoop-2.8.2-RC1
> >
> >   The RC tag in git is: release-2.8.2-RC1, and the latest commit id
> > is: 66c47f2a01ad9637879e95f80c41f798373828fb
> >
> >   The maven artifacts are available via repository.apache.org at:
> > https://repository.apache.org/content/repositories/orgapachehadoop-1064
> >
> >   Please try the release and vote; the vote will run for the usual 5
> > days, ending on 10/24/2017 6pm PST time.
> >
> > Thanks,
> >
> > Junping
> >
> >
>


Re: [VOTE] Merge feature branch YARN-5355 (Timeline Service v2) to trunk

2017-08-31 Thread Ravi Prakash
+1 to maintaining history.

On Wed, Aug 30, 2017 at 11:38 PM, varunsax...@apache.org <
varun.saxena.apa...@gmail.com> wrote:

> Yes, I had used "git merge --no-ff" while merging ATSv2 to trunk.
> Maintaining the history, I believe, can be useful as it can make reverts
> easier if they are ever required, and it is an easy reference point for
> seeing who contributed what without having to go back to the branch.
>
> Regards,
> Varun Saxena.
>
> On Thu, Aug 31, 2017 at 3:56 AM, Vrushali C 
> wrote:
>
> > Thanks Sangjin for the link to the previous discussions on this! I think
> > that helps answer Steve's questions.
> >
> > As decided on that thread [1], YARN-5355 as a feature branch was merged
> to
> > trunk via "git merge --no-ff" .
> >
> > Although trunk already had TSv2 code (alpha1) prior to this merge, we
> > chose to develop on a feature branch YARN-5355 so that we could control
> > when changes went into trunk and didn't inadvertently disrupt trunk.
> >
> > Is the latest merge causing any conflicts or issues for s3guard, Steve?
> >
> > thanks
> > Vrushali
> > [1] https://lists.apache.org/thread.html/43cd65c6b6c3c0e8ac2b3c76afd9ef
> > f1f78b177fabe9c4a96d9b3d0b@1440189889@%3Ccommon-dev.hadoop.apache.org%3E
> >
> >
> > On Wed, Aug 30, 2017 at 2:37 PM, Sangjin Lee  wrote:
> >
> >> I recall this discussion about a couple of years ago:
> >> https://lists.apache.org/thread.html/43cd65c6b6c3c0e8ac
> >> 2b3c76afd9eff1f78b177fabe9c4a96d9b3d0b@1440189889@%3Ccommon-
> >> dev.hadoop.apache.org%3E
> >>
> >> On Wed, Aug 30, 2017 at 2:32 PM, Steve Loughran  >
> >> wrote:
> >>
> >>> I'd have assumed it would have gone in as one single patch, rather than
> >>> a full history. I don't see why the trunk needs all the evolutionary
> >>> history of a build.
> >>>
> >>> What should our policy/process be here?
> >>>
> >>> I do currently plan to merge the s3guard in as one single squashed
> >>> patch; just getting HADOOP-14809 sorted first.
> >>>
> >>>
> >>> > On 30 Aug 2017, at 07:09, Vrushali C 
> wrote:
> >>> >
> >>> > I'm adding my +1 (binding) to conclude the vote.
> >>> >
> >>> > With 13 +1's (11 binding) and no -1's, the vote passes. We'll get on
> >>> with
> >>> > the merge to trunk shortly. Thanks everyone!
> >>> >
> >>> > Regards
> >>> > Vrushali
> >>> >
> >>> >
> >>> > On Tue, Aug 29, 2017 at 10:54 AM, varunsax...@apache.org <
> >>> > varun.saxena.apa...@gmail.com> wrote:
> >>> >
> >>> >> +1 (binding).
> >>> >>
> >>> >> Kudos to all the team members for their great work!
> >>> >>
> >>> >> Being part of the ATSv2 team, I have been involved with either
> >>> development
> >>> >> or review of most of the JIRAs.
> >>> >> Tested ATSv2 in both secure and non-secure mode. Also verified that
> >>> there
> >>> >> is no impact when ATSv2 is turned off.
> >>> >>
> >>> >> Regards,
> >>> >> Varun Saxena.
> >>> >>
> >>> >> On Tue, Aug 22, 2017 at 12:02 PM, Vrushali Channapattan <
> >>> >> vrushalic2...@gmail.com> wrote:
> >>> >>
> >>> >>> Hi folks,
> >>> >>>
> >>> >>> Per earlier discussion [1], I'd like to start a formal vote to
> merge
> >>> >>> feature branch YARN-5355 [2] (Timeline Service v.2) to trunk. The
> >>> vote
> >>> >>> will
> >>> >>> run for 7 days, and will end August 29 11:00 PM PDT.
> >>> >>>
> >>> >>> We have previously completed one merge onto trunk [3] and Timeline
> >>> Service
> >>> >>> v2 has been part of Hadoop release 3.0.0-alpha1.
> >>> >>>
> >>> >>> Since then, we have been working on extending the capabilities of
> >>> Timeline
> >>> >>> Service v2 in a feature branch [2] for a while, and we are
> reasonably
> >>> >>> confident that the state of the feature meets the criteria to be
> >>> merged
> >>> >>> onto trunk and we'd love folks to get their hands on it in a test
> >>> capacity
> >>> >>> and provide valuable feedback so that we can make it
> >>> production-ready.
> >>> >>>
> >>> >>> In a nutshell, Timeline Service v.2 delivers significant
> scalability
> >>> and
> >>> >>> usability improvements based on a new architecture. What we would
> >>> like to
> >>> >>> merge to trunk is termed "alpha 2" (milestone 2). The feature has a
> >>> >>> complete end-to-end read/write flow with security and read level
> >>> >>> authorization via whitelists. You should be able to start setting
> it
> >>> up
> >>> >>> and
> >>> >>> testing it.
> >>> >>>
> >>> >>> At a high level, the following are the key features that have been
> >>> >>> implemented since alpha1:
> >>> >>> - Security via Kerberos Authentication and delegation tokens
> >>> >>> - Read side simple authorization via whitelist
> >>> >>> - Client configurable entity sort ordering
> >>> >>> - Richer REST APIs for apps, app attempts, containers, fetching
> >>> metrics by
> >>> >>> timerange, pagination, sub-app entities
> >>> >>> - Support for storing sub-application entities (entities that exist
> >>> >>> outside
> >>> >>> the scope of an application)
> >>> >>> - 

Re: Branch merges and 3.0.0-beta1 scope

2017-08-23 Thread Ravi Prakash
Also, when people +1 a merge, can they please describe whether they tested or
used the feature, in addition to what is already described in the thread?

On Wed, Aug 23, 2017 at 11:18 AM, Vrushali Channapattan <
vrushalic2...@gmail.com> wrote:

> For timeline service v2, we have completed all subtasks under YARN-5355
> [1].
>
> We initiated a merge-to-trunk vote [2] yesterday.
>
> thanks
> Vrushali
> [1] https://issues.apache.org/jira/browse/YARN-5355
> [2]
> http://mail-archives.apache.org/mod_mbox/hadoop-common-
> dev/201708.mbox/%3CCAE=b_fbLT2J+Ezb4wqdN_UwBiG1Sd5kpqGaw+9Br__zou5yNTQ@
> mail.gmail.com%3E
>
>
> On Wed, Aug 23, 2017 at 11:12 AM, Vinod Kumar Vavilapalli <
> vino...@apache.org> wrote:
>
> > Agreed. I was very clearly not advocating for rushing in features. If you
> > have followed my past emails, I have only strongly advocated features be
> > worked in branches and get merged when they are in a reasonable state.
> >
> > Each branch contributor group should look at their readiness and merge
> > stuff in assuming that the branch reached a satisfactory state. That’s
> it.
> >
> > From a release management perspective, blocking features just because we
> > are a month from the deadline is not reasonable. Let the branch
> > contributors rationalize and make this decision, and the rest of us can
> > support them in making the decision.
> >
> > +Vinod
> >
> > > At this point, there have been three planned alphas from September 2016
> > until July 2017 to "get in features".  While a couple of upcoming
> features
> > are "a few weeks" away, I think all of us are aware how predictable
> > software development schedules can be.  I think we can also all agree
> that
> > rushing just to meet a release deadline isn't the best practice when it
> > comes to software development either.
> > >
> > > Andrew has been very clear about his goals at each step, and I think
> > > Wangda's willingness to not rush in resource types was an appropriate
> > > response. I'm sympathetic to the goals of getting a feature into 3.0,
> > > but it might be a good idea for each project that is a "few weeks away" to
> > > seriously look at its readiness compared to the features which have been
> > > in testing for 6+ months already.
> > >
> > > -Ray
> >
> >
>


Re: Are binary artifacts part of a release?

2017-08-15 Thread Ravi Prakash
bq. My stance is that if we're going to publish something, it should be
good, or we shouldn't publish it at all.

I agree

On Tue, Aug 15, 2017 at 2:57 AM, Steve Loughran 
wrote:

>
> > On 15 Aug 2017, at 07:14, Andrew Wang  wrote:
> >
> > To close the thread on this, I'll try to summarize the LEGAL JIRA. I
> wasn't
> > able to convince anyone to make changes to the apache.org docs.
> >
> > Convenience binary artifacts are not official release artifacts and thus
> > are not voted on. However, since they are distributed by Apache, they are
> > still subject to the same distribution requirements as official release
> > artifacts. This means they need to have a LICENSE and NOTICE file, follow
> > ASF licensing rules, etc. The PMC needs to ensure that binary artifacts
> > meet these requirements.
> >
> > However, being a "convenience" artifact doesn't mean it isn't important.
> > The appropriate level of quality for binary artifacts is left up to the
> > project. An OpenOffice person mentioned the quality of their binary
> > artifacts is super important since very few of their users will compile
> > their own office suite.
> >
> > I don't know if we've discussed the topic of binary artifact quality in
> > Hadoop. My stance is that if we're going to publish something, it should
> be
> > good, or we shouldn't publish it at all. I think we do want to publish
> > binary tarballs (it's the easiest way for new users to get started with
> > Hadoop), so it's fair to consider them when evaluating a release.
> >
> > Best,
> > Andrew
> >
>
>
> We do publish the artifacts to the m2 repo, which is very much a
> downstream distribution mechanism. For other redistribution mechanisms (yum,
> apt-get) it's implicitly handled by whoever manages those repos.
>
> > On Mon, Jul 31, 2017 at 8:43 PM, Konstantin Shvachko <
> shv.had...@gmail.com>
> > wrote:
> >
> >> It does not. Just adding historical references, as Andrew raised the
> >> question.
> >>
> >> On Mon, Jul 31, 2017 at 7:38 PM, Allen Wittenauer <
> >> a...@effectivemachines.com> wrote:
> >>
> >>>
> >>> ... that doesn't contradict anything I said.
> >>>
>  On Jul 31, 2017, at 7:23 PM, Konstantin Shvachko <
> shv.had...@gmail.com>
> >>> wrote:
> 
>  The issue was discussed on several occasions in the past.
>  Took me a while to dig this out as an example:
 http://mail-archives.apache.org/mod_mbox/hadoop-general/201111.mbox/%3C4EB0827C.6040204%40apache.org%3E
> 
>  Doug Cutting:
>  "Folks should not primarily evaluate binaries when voting. The ASF
> >>> primarily produces and publishes source-code
>  so voting artifacts should be optimized for evaluation of that."
> 
>  Thanks,
>  --Konst
> 
>  On Mon, Jul 31, 2017 at 4:51 PM, Allen Wittenauer <
> >>> a...@effectivemachines.com> wrote:
> 
> > On Jul 31, 2017, at 4:18 PM, Andrew Wang 
> >>> wrote:
> >
> > Forking this off to not distract from release activities.
> >
> > I filed https://issues.apache.org/jira/browse/LEGAL-323 to get
> >>> clarity on the matter. I read the entire webpage, and it could be
> improved
> >>> one way or the other.
> 
> 
> IANAL, my read has always led me to believe:
> 
> * An artifact is anything that is uploaded to dist.a.o
> >>> and repository.a.o
> * A release consists of one or more artifacts
> >>> ("Releases are, by definition, anything that is published beyond the
> group
> >>> that owns it. In our case, that means any publication outside the
> group of
> >>> people on the product dev list.")
> * One of those artifacts MUST be source
> * (insert voting rules here)
> * They must be built on a machine in control of the RM
> * There are no exceptions for alpha, nightly, etc
> * (various other requirements)
> 
> i.e., release != artifact; it's more like release = artifact * n.
> 
> Do you have to have binaries?  No (e.g., Apache SpamAssassin
> >>> has no binaries to create).  But if you place binaries in dist.a.o or
> >>> repository.a.o, they are effectively part of your release and must
> follow
> >>> the same rules.  (Votes, etc.)
> 
> 
> >>>
> >>>
> >>
>
>
> -
> To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
>
>


[jira] [Resolved] (MAPREDUCE-6910) MapReduceTrackingUriPlugin can not return the right URI of history server with HTTPS

2017-07-17 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved MAPREDUCE-6910.
-
Resolution: Fixed

> MapReduceTrackingUriPlugin can not return the right URI of history server 
> with HTTPS
> 
>
> Key: MAPREDUCE-6910
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6910
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: jobhistoryserver
>Affects Versions: 2.7.3, 2.8.1, 3.0.0-alpha3
>Reporter: Lantao Jin
>Assignee: Lantao Jin
> Fix For: 2.9.0, 3.0.0-beta1, 2.8.2
>
> Attachments: MAPREDUCE-6910.001.patch, MAPREDUCE-6910.002.patch
>
>
> When the {{MapReduceTrackingUriPlugin}} is enabled, URI requests from the proxy
> server or the RM UI for applications that have aged out of
> {{yarn.resourcemanager.max-completed-applications}} should be redirected to the
> history server URI.
> But when I access an HTTPS history server with these properties:
> {quote}
> <property>
>   <name>mapreduce.jobhistory.http.policy</name>
>   <value>HTTPS_ONLY</value>
> </property>
> <property>
>   <name>mapreduce.jobhistory.webapp.https.address</name>
>   <value>history.example.com:12345</value>
> </property>
> {quote}
> The {{MapReduceTrackingUriPlugin}} still returns a default HTTP URI:
> {{http://0.0.0.0:19888}}
> or
> {{http://history.example.com:67890}}
> if {{mapreduce.jobhistory.webapp.address}} is set at the same time:
> {quote}
> <property>
>   <name>mapreduce.jobhistory.webapp.address</name>
>   <value>history.example.com:67890</value>
> </property>
> {quote}
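
A minimal illustrative sketch of the behaviour the report asks for - choosing the
HTTPS web address when {{mapreduce.jobhistory.http.policy}} is HTTPS_ONLY - and
not the actual MAPREDUCE-6910 patch. The class name, the HTTP_ONLY default, and
the 19890 default HTTPS port below are illustrative assumptions.

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;

public class HistoryUriSketch {

  // Pick the job history server web address based on the configured policy.
  static URI historyServerUri(Configuration conf) {
    boolean httpsOnly = "HTTPS_ONLY".equals(
        conf.get("mapreduce.jobhistory.http.policy", "HTTP_ONLY"));
    // HTTPS_ONLY must use the https.address; anything else falls back to the
    // plain HTTP address (default 0.0.0.0:19888, as in the report above).
    String address = httpsOnly
        ? conf.get("mapreduce.jobhistory.webapp.https.address", "0.0.0.0:19890")
        : conf.get("mapreduce.jobhistory.webapp.address", "0.0.0.0:19888");
    return URI.create((httpsOnly ? "https://" : "http://") + address);
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("mapreduce.jobhistory.http.policy", "HTTPS_ONLY");
    conf.set("mapreduce.jobhistory.webapp.https.address", "history.example.com:12345");
    // Prints https://history.example.com:12345 rather than the HTTP default.
    System.out.println(historyServerUri(conf));
  }
}
{code}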



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Reopened] (MAPREDUCE-6910) MapReduceTrackingUriPlugin can not return the right URI of history server with HTTPS

2017-07-14 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash reopened MAPREDUCE-6910:
-

> MapReduceTrackingUriPlugin can not return the right URI of history server 
> with HTTPS
> 
>
> Key: MAPREDUCE-6910
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6910
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: jobhistoryserver
>Affects Versions: 2.7.3, 2.8.1, 3.0.0-alpha3
>Reporter: Lantao Jin
> Fix For: 2.9.0, 3.0.0-beta1, 2.8.2
>
> Attachments: MAPREDUCE-6910.001.patch, MAPREDUCE-6910.002.patch
>
>
> When the {{MapReduceTrackingUriPlugin}} is enabled, URI requests from the proxy
> server or the RM UI for applications that have aged out of
> {{yarn.resourcemanager.max-completed-applications}} should be redirected to the
> history server URI.
> But when I access an HTTPS history server with these properties:
> {quote}
> <property>
>   <name>mapreduce.jobhistory.http.policy</name>
>   <value>HTTPS_ONLY</value>
> </property>
> <property>
>   <name>mapreduce.jobhistory.webapp.https.address</name>
>   <value>history.example.com:12345</value>
> </property>
> {quote}
> The {{MapReduceTrackingUriPlugin}} still returns a default HTTP URI:
> {{http://0.0.0.0:19888}}
> or
> {{http://history.example.com:67890}}
> if {{mapreduce.jobhistory.webapp.address}} is set at the same time:
> {quote}
> <property>
>   <name>mapreduce.jobhistory.webapp.address</name>
>   <value>history.example.com:67890</value>
> </property>
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)

2017-03-22 Thread Ravi Prakash
Thanks for all the effort Junping!

+1 (binding)
+ Verified signature and MD5, SHA1, SHA256 checksum of tarball
+ Verified SHA ID in git corresponds to RC3 tag
+ Verified wordcount for one small text file produces same output as
hadoop-2.7.3.
+ HDFS Namenode UI looks good.

I agree none of the issues reported so far are blockers. Looking forward to
another great release.

Thanks
Ravi

On Tue, Mar 21, 2017 at 8:10 PM, Junping Du  wrote:

> Thanks all for response with verification work and vote!
>
>
> Sounds like we are hitting several issues here, although none seems to be a
> blocker so far. Given the large commit set - 2000+ commits first landing in a
> branch-2 release - we should perhaps follow the 2.7.0 practice and declare that
> this release is not for production clusters, per Vinod's suggestion in a
> previous email. We should quickly come up with a 2.8.1 release in the next 1 or
> 2 months for production deployment.
>
>
> We will close the vote in the next 24 hours. For people who haven't voted,
> please keep up the verification work and report any issues if found - I will
> check whether another round of RC is needed based on your findings. Thanks!
>
>
> Thanks,
>
>
> Junping
>
>
> 
> From: Kuhu Shukla 
> Sent: Tuesday, March 21, 2017 3:17 PM
> Cc: Junping Du; common-...@hadoop.apache.org; hdfs-...@hadoop.apache.org;
> yarn-...@hadoop.apache.org; mapreduce-dev@hadoop.apache.org
> Subject: Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)
>
>
> +1 (non-binding)
>
> - Verified signatures.
> - Downloaded and built from source tar.gz.
> - Deployed a pseudo-distributed cluster on Mac Sierra.
> - Ran example Sleep job successfully.
> - Deployed latest Apache Tez 0.9 and ran sample Tez orderedwordcount
> successfully.
>
> Thank you Junping and everyone else who worked on getting this release out.
>
> Warm Regards,
> Kuhu
> On Tuesday, March 21, 2017, 3:42:46 PM CDT, Eric Badger
>  wrote:
> +1 (non-binding)
>
> - Verified checksums and signatures of all files
> - Built from source on MacOS Sierra via JDK 1.8.0 u65
> - Deployed single-node cluster
> - Successfully ran a few sample jobs
>
> Thanks,
>
> Eric
>
> On Tuesday, March 21, 2017 2:56 PM, John Zhuge 
> wrote:
>
>
>
> +1. Thanks for the great effort, Junping!
>
>
>   - Verified checksums and signatures of the tarballs
>   - Built source code with Java 1.8.0_66-b17 on Mac OS X 10.12.3
>   - Built source and native code with Java 1.8.0_111 on Centos 7.2.1511
>   - Cloud connectors:
>   - s3a: integration tests, basic fs commands
>   - adl: live unit tests, basic fs commands. See notes below.
>   - Deployed a pseudo cluster, passed the following sanity tests in
>   both insecure and SSL mode:
>   - HDFS: basic dfs, distcp, ACL commands
>   - KMS and HttpFS: basic tests
>   - MapReduce wordcount
>   - balancer start/stop
>
>
> Needs the following JIRAs to pass all ADL tests:
>
>   - HADOOP-14205. No FileSystem for scheme: adl. Contributed by John Zhuge.
>   - HDFS-11132. Allow AccessControlException in contract tests when
>   getFileStatus on subdirectory of existing files. Contributed by
> Vishwajeet
>   Dusane
>   - HADOOP-13928. TestAdlFileContextMainOperationsLive.testGetFileContext1
>   runtime error. (John Zhuge via lei)
>
>
> On Mon, Mar 20, 2017 at 10:31 AM, John Zhuge  wrote:
>
> > Yes, it only affects ADL. There is a workaround of adding these 2
> > properties to core-site.xml:
> >
> >  <property>
> >    <name>fs.adl.impl</name>
> >    <value>org.apache.hadoop.fs.adl.AdlFileSystem</value>
> >  </property>
> >
> >  <property>
> >    <name>fs.AbstractFileSystem.adl.impl</name>
> >    <value>org.apache.hadoop.fs.adl.Adl</value>
> >  </property>
> >
> > I have the initial patch ready but hitting these live unit test failures:
> >
> > Failed tests:
> >
> > TestAdlFileSystemContractLive.runTest:60->FileSystemContractBaseTest.
> > testListStatus:257
> > expected:<1> but was:<10>
> >
> > Tests in error:
> >
> > TestAdlFileContextMainOperationsLive>FileContextMainOperationsBaseTest.
> > testMkdirsFailsForSubdirectoryOfExistingFile:254
> > » AccessControl
> >
> > TestAdlFileSystemContractLive.runTest:60->FileSystemContractBaseTest.
> > testMkdirsFailsForSubdirectoryOfExistingFile:190
> > » AccessControl
> >
> >
> > Stay tuned...
> >
> > John Zhuge
> > Software Engineer, Cloudera
> >
> > On Mon, Mar 20, 2017 at 10:02 AM, Junping Du 
> wrote:
> >
> > > Thank you for reporting the issue, John! Does this issue only affect ADL
> > > (Azure Data Lake), which is a new feature for 2.8, rather than other
> > > existing filesystems? If so, I think we can leave the fix to 2.8.1, given
> > > this is not a regression and just a new feature that is broken.
> > >
> > >
> > > Thanks,
> > >
> > >
> > > Junping
> > > --
> > > *From:* John Zhuge 
> > > *Sent:* Monday, March 20, 2017 9:07 AM
> > > *To:* Junping Du
> > > *Cc:* common-...@hadoop.apache.org; 

[jira] [Created] (MAPREDUCE-6810) hadoop-mapreduce-client-nativetask compilation broken on GCC-6.2.1

2016-11-15 Thread Ravi Prakash (JIRA)
Ravi Prakash created MAPREDUCE-6810:
---

 Summary: hadoop-mapreduce-client-nativetask compilation broken on 
GCC-6.2.1
 Key: MAPREDUCE-6810
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6810
 Project: Hadoop Map/Reduce
  Issue Type: Task
Affects Versions: 3.0.0-alpha1
Reporter: Ravi Prakash
Assignee: Ravi Prakash


I recently upgraded from Fedora 22 to Fedora 25 (I'm assuming this means the 
latest and greatest compilers, cmake etc.) My trunk build failed with this 
error:
{code}
[WARNING] 
/home/raviprak/Code/hadoop/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/lib/Log.h:35:67:
 error: unable to find string literal operator ‘operator""_fmt_’ with ‘const 
char [37]’, ‘long unsigned int’ arguments
[WARNING]  fprintf(LOG_DEVICE, "%02d/%02d/%02d %02d:%02d:%02d INFO 
"_fmt_"\n", \
{code}

https://access.redhat.com/documentation/en-US/Red_Hat_Developer_Toolset/3/html/User_Guide/sect-Changes_in_Version_3.0-GCC.html
bq.This applies to any string literal followed without white space by some 
macro. To fix this, add some white space between the string literal and the 
macro name. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




Re: Apache MSDN Offer is Back

2016-07-20 Thread Ravi Prakash
Thanks Chris!

I did avail myself of the offer a few months ago, and wasn't able to figure out
whether a Windows license was also available. I want to run Windows inside a
virtual machine on my Linux laptop, for the rare cases where a patch may affect
Windows. Any clue whether that is possible?

Thanks
Ravi

On Tue, Jul 19, 2016 at 4:09 PM, Chris Nauroth 
wrote:

> A few months ago, we learned that the offer for ASF committers to get an
> MSDN license had gone away.  I'm happy to report that as of a few weeks
> ago, that offer is back in place.  For more details, committers can check
> out https://svn.apache.org/repos/private/committers and read
> donated-licenses/msdn.txt.
>
> --Chris Nauroth
>


Re: [DISCUSS] Upgrading Guice to 4.0 (HADOOP-12064)

2016-07-05 Thread Ravi Prakash
Go Go Go! Thanks for all the upgrade work Tsuyoshi!

On Thu, Jun 30, 2016 at 12:03 PM, Tsuyoshi Ozawa  wrote:

> Thanks, Andrew.
>
> Based on discussion here, I would like to merge it into *trunk* if
> there are no objections tomorrow.
>
> Thanks,
> - Tsuyoshi
>
> On Wed, Jun 29, 2016 at 12:28 PM, Andrew Wang 
> wrote:
> > I think it's okay to merge. We've already bumped other deps in trunk.
> >
> > On Wed, Jun 29, 2016 at 12:27 PM, Tsuyoshi Ozawa 
> wrote:
> >>
> >> I forgot to mention an important point: it's a blocker for compiling
> >> Hadoop with JDK8. Hence, we need to merge it on both the client side and
> >> the server side anyway.
> >>
> >> Thanks,
> >> - Tsuyoshi
> >>
> >> On Wed, Jun 29, 2016 at 12:24 PM, Tsuyoshi Ozawa 
> wrote:
> >> > Thanks Vinod, Sangjin, and Sean for your comments.
> >> >
> >> > Okay, I will take a look at the classpath isolation.
> >> > Should I postpone merging the Guice upgrade to trunk? IMHO, it works with
> >> > tests, so it's okay to merge to trunk. Thoughts?
> >> >
> >> > - Tsuyoshi
> >> >
> >> > On Wed, Jun 29, 2016 at 12:10 PM, Sangjin Lee 
> wrote:
> >> >> Yeah it would be awesome if we can get feedback and/or suggestions on
> >> >> these
> >> >> JIRAs (HADOOP-11804 and HADOOP-13070).
> >> >>
> >> >> Thanks,
> >> >> Sangjin
> >> >>
> >> >> On Wed, Jun 29, 2016 at 10:55 AM, Sean Busbey 
> >> >> wrote:
> >> >>>
> >> >>> At the very least, I'm running through an updated shaded hadoop
> client
> >> >>> this week[1] (HBase is my test application and it wandered onto some
> >> >>> private things that broke in branch-2). And Sangjin has a good lead on
> >> >>> a lower-short-term-cost incremental improvement for runtime isolation
> >> >>> of apps built on yarn/mapreduce[2]. He's been patiently waiting for
> >> >>> more review feedback.
> >> >>>
> >> >>>
> >> >>> [1]: https://issues.apache.org/jira/browse/HADOOP-11804
> >> >>> [2]: https://issues.apache.org/jira/browse/HADOOP-13070
> >> >>>
> >> >>> On Wed, Jun 29, 2016 at 12:33 PM, Vinod Kumar Vavilapalli
> >> >>>  wrote:
> >> >>> > My strong expectation is that we’ll have a version of classpath
> >> >>> > isolation in our first release of 3.x. I’m planning to spend some
> >> >>> > cycles on this right away.
> >> >>> >
> >> >>> > Assuming classpath isolation gets in, it is reasonable to bump up
> >> >>> > our
> >> >>> > dependencies like Jetty / Guice to the latest stable versions.
> >> >>> >
> >> >>> > Thanks
> >> >>> > +Vinod
> >> >>> >
> >> >>> >> On Jun 27, 2016, at 6:01 AM, Tsuyoshi Ozawa 
> >> >>> >> wrote:
> >> >>> >>
> >> >>> >> Hi developers,
> >> >>> >>
> >> >>> >> I plan to upgrade the Google Guice dependency on trunk. The change
> >> >>> >> also includes asm and cglib upgrades.
> >> >>> >> I checked the following points:
> >> >>> >>
> >> >>> >> * Both HDFS and YARN UIs work well.
> >> >>> >> * All web UI-related tests pass, as described on HADOOP-12064.
> >> >>> >> * Ran a MapReduce job, and it works well.
> >> >>> >>
> >> >>> >> https://issues.apache.org/jira/browse/HADOOP-12064
> >> >>> >>
> >> >>> >> Do you have any concern or opinion?  I would like to merge it to
> >> >>> >> trunk
> >> >>> >> on this Friday if you have no objections.
> >> >>> >>
> >> >>> >> Best,
> >> >>> >> - Tsuyoshi
> >> >>> >>
> >> >>> >>
> >> >>> >>
> -
> >> >>> >> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> >> >>> >> For additional commands, e-mail:
> common-dev-h...@hadoop.apache.org
> >> >>> >>
> >> >>> >>
> >> >>> >
> >> >>> >
> >> >>> >
> >> >>> >
> -
> >> >>> > To unsubscribe, e-mail:
> mapreduce-dev-unsubscr...@hadoop.apache.org
> >> >>> > For additional commands, e-mail:
> >> >>> > mapreduce-dev-h...@hadoop.apache.org
> >> >>> >
> >> >>>
> >> >>>
> >> >>>
> >> >>> --
> >> >>> busbey
> >> >>>
> >> >>>
> -
> >> >>> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> >> >>> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> >> >>>
> >> >>
> >>
> >> -
> >> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> >> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
> >>
> >
>
> -
> To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
>
>


Re: Why there are so many revert operations on trunk?

2016-06-07 Thread Ravi Prakash
Lolz!

Thanks for your opinion Larry. I have often seen "-1 until this is done
according to my way rather than your way" (obviously not in those words),
even when both ways are perfectly reasonable. Anyway, I didn't expect the
voting rules to change. :-)

Cheers
Ravi

On Tue, Jun 7, 2016 at 11:02 AM, larry mccay <larry.mc...@gmail.com> wrote:

> A -1 need not be taken as a derogatory statement; being a number should
> actually make it less emotional.
> It is dangerous for a community to become oversensitive to it.
>
> I generally see language such as "I am -1 on this until this particular
> thing is fixed" or that it violates some common pattern or precedent set
> in the project. This is perfectly reasonable language and there is no
> reason to make the reviewer provide an alternative.
>
> So, I am giving my -1 to any proposal for rule changes on -1 votes. :)
>
>
> On Tue, Jun 7, 2016 at 1:15 PM, Ravi Prakash <ravihad...@gmail.com> wrote:
>
>> +1 on being more respectful. We seem to be having a lot of distasteful
>> discussions recently. If we fight each other, we are only helping our
>> competitors win (and trust me, it's out there).
>>
>> I would also respectfully request people not to throw -1s around. I have
>> faced this a few times and it's really frustrating. Everyone has opinions,
>> and sometimes different people can't fathom why someone else thinks the
>> way they do. I am pretty sure none of us is acting with malicious intent,
>> so perhaps a little more tolerance, faith and trust will help all of us
>> improve Hadoop and the ecosystem much faster. That's not to say that
>> -1s are never warranted, but we should treat them as an extreme
>> measure. Unfortunately there is very little disincentive right now to vote
>> -1. Maybe we should modify the rules: if you vote -1, you have to
>> come up with an alternative implementation? (Perhaps limit the amount of
>> time you have to the amount already spent producing the patch that you
>> are against?)
>>
>> Just my 2 cents
>> Ravi
>>
>>
>> On Tue, Jun 7, 2016 at 7:54 AM, Junping Du <j...@hortonworks.com> wrote:
>>
>> > - We need to at the least force a reset of expectations w.r.t how trunk
>> > and small / medium / incompatible changes there are treated. We should
>> hold
>> > off making a release off trunk before this gets fully discussed in the
>> > community and we all reach a consensus.
>> >
>> > +1. We should hold off any release work off trunk before we reach a
>> > consensus. Otherwise more and more development work/features could be
>> > affected, just like Larry mentioned.
>> >
>> >
>> > - Reverts (or revert and move to a feature-branch) shouldn’t have been
>> > unequivocally done without dropping a note / informing everyone /
>> building
>> > consensus.
>> >
>> > Agree. To revert commits from other committers, I think we need to: 1)
>> > provide technical evidence/reasons that are solid as a rock, like: breaking
>> > functionality, tests, or API compatibility, or significantly offending code
>> > conventions, etc.; 2) reach consensus with the related contributors/committers
>> > based on these technical reasons/evidence. Unfortunately, I didn't see us
>> > ever do either thing in this case.
>> >
>> >
>> > - Freaking out on -1’s and reverts - we as a community need to be less
>> > stigmatic about -1s / reverts.
>> >
>> > +1. As a community, I believe we all prefer to work in a more friendly
>> > environment. In many cases, a -1 without a solid reason will frustrate the
>> > people who are contributing. I think we should restrain our -1s unless they
>> > are really necessary.
>> >
>> >
>> >
>> > Thanks,
>> >
>> >
>> > Junping
>> >
>> >
>> > 
>> > From: Vinod Kumar Vavilapalli <vino...@apache.org>
>> > Sent: Monday, June 06, 2016 9:36 PM
>> > To: Andrew Wang
>> > Cc: Junping Du; Aaron T. Myers; common-...@hadoop.apache.org;
>> > hdfs-...@hadoop.apache.org; mapreduce-dev@hadoop.apache.org;
>> > yarn-...@hadoop.apache.org
>> > Subject: Re: Why there are so many revert operations on trunk?
>> >
>> > Folks,
>> >
>> > It is truly disappointing how we are escalating situations that can be
>> > resolved through basic communication.
>> >
>> > Things that shouldn’t have happened
>> >

Re: Why there are so many revert operations on trunk?

2016-06-07 Thread Ravi Prakash
+1 on being more respectful. We seem to be having a lot of distasteful
discussions recently. If we fight each other, we are only helping our
competitors win (and trust me, it's out there).

I would also respectfully request people not to throw -1s around. I have
faced this a few times and it's really frustrating. Everyone has opinions,
and sometimes different people can't fathom why someone else thinks the
way they do. I am pretty sure none of us is acting with malicious intent,
so perhaps a little more tolerance, faith and trust will help all of us
improve Hadoop and the ecosystem much faster. That's not to say that
-1s are never warranted, but we should treat them as an extreme
measure. Unfortunately there is very little disincentive right now to vote
-1. Maybe we should modify the rules: if you vote -1, you have to
come up with an alternative implementation? (Perhaps limit the amount of
time you have to the amount already spent producing the patch that you
are against?)

Just my 2 cents
Ravi


On Tue, Jun 7, 2016 at 7:54 AM, Junping Du  wrote:

> - We need to at the least force a reset of expectations w.r.t how trunk
> and small / medium / incompatible changes there are treated. We should hold
> off making a release off trunk before this gets fully discussed in the
> community and we all reach a consensus.
>
> +1. We should hold off any release work off trunk before we reach a
> consensus. Otherwise more and more development work/features could be
> affected, just like Larry mentioned.
>
>
> - Reverts (or revert and move to a feature-branch) shouldn’t have been
> unequivocally done without dropping a note / informing everyone / building
> consensus.
>
> Agree. To revert commits from other committers, I think we need to: 1)
> provide technical evidence/reasons that are solid as a rock, like: breaking
> functionality, tests, or API compatibility, or significantly offending code
> conventions, etc.; 2) reach consensus with the related contributors/committers
> based on these technical reasons/evidence. Unfortunately, I didn't see us
> ever do either thing in this case.
>
>
> - Freaking out on -1’s and reverts - we as a community need to be less
> stigmatic about -1s / reverts.
>
> +1. As a community, I believe we all prefer to work in a more friendly
> environment. In many cases, a -1 without a solid reason will frustrate the
> people who are contributing. I think we should restrain our -1s unless they
> are really necessary.
>
>
>
> Thanks,
>
>
> Junping
>
>
> 
> From: Vinod Kumar Vavilapalli 
> Sent: Monday, June 06, 2016 9:36 PM
> To: Andrew Wang
> Cc: Junping Du; Aaron T. Myers; common-...@hadoop.apache.org;
> hdfs-...@hadoop.apache.org; mapreduce-dev@hadoop.apache.org;
> yarn-...@hadoop.apache.org
> Subject: Re: Why there are so many revert operations on trunk?
>
> Folks,
>
> It is truly disappointing how we are escalating situations that can be
> resolved through basic communication.
>
> Things that shouldn’t have happened
> - After a few objections were raised, commits should have simply stopped
> before restarting again but only after consensus
> - Reverts (or revert and move to a feature-branch) shouldn’t have been
> unequivocally done without dropping a note / informing everyone / building
> consensus. And no, not even a release-manager gets this free pass. Not on
> branch-2, not on trunk, not anywhere.
> - Freaking out on -1’s and reverts - we as a community need to be less
> stigmatic about -1s / reverts.
>
> Trunk releases:
> This is the other important bit about huge difference of expectations
> between the two sides w.r.t trunk and branching. Till now, we’ve never made
> releases out of trunk. So in-progress features that people deemed to not
> need a feature branch could go into trunk without much trouble. Given that
> we are now making releases off trunk, I can see (a) the RM saying "no,
> don’t put in-progress stuff and (b) the contributors saying “no we don’t
> want the overhead of a branch”. I’ve raised related topics (but only
> focusing on incompatible changes) before -
> http://markmail.org/message/m6x73t6srlchywsn - but we never decided
> anything.
>
> We need to at the least force a reset of expectations w.r.t how trunk and
> small / medium / incompatible changes there are treated. We should hold off
> making a release off trunk before this gets fully discussed in the
> community and we all reach a consensus.
>
> * Without a user API, there's no way for people to use it, so not much
> advantage to having it in a release
>
> Since the code is separate and probably won't break any existing code, I
> won't -1 if you want to include this in a release without a user API, but
> again, I question the utility of including code that can't be used.
>
> Clearly, there are two sides to this argument. One side claims the absence
> of user-facing public / stable APIs, and that for all purposes this is
> dead-code for everyone other 

Re: ASF OS X Build Infrastructure

2016-05-20 Thread Ravi Prakash
FWIW, I was able to get a response from the form last month. I was issued a
new MSDN subscriber ID with which I could have downloaded Microsoft Visual
Studio (and some other products, I think). I was interested in downloading
an image of Windows to run in a VM, but the downloader is... wait for
it... an exe file :-) I haven't gotten around to begging someone with a
Windows OS to run that image downloader.

On Fri, May 20, 2016 at 10:39 AM, Sean Busbey  wrote:

> Some talk about the MSDN-for-committers program recently passed by on a
> private
> list. It's still active, it just changed homes within Microsoft. The
> info should still be in the committer repo. If something is amiss
> please let me know and I'll pipe up to the folks already plugged in to
> confirm it's active.
>
> On Fri, May 20, 2016 at 12:13 PM, Chris Nauroth
>  wrote:
> > It's very disappointing to see that vanish.  I'm following up to see if I
> > can learn more about what happened or if I can do anything to help
> > reinstate it.
> >
> > --Chris Nauroth
> >
> >
> >
> >
> > On 5/20/16, 6:11 AM, "Steve Loughran"  wrote:
> >
> >>
> >>> On 20 May 2016, at 10:40, Lars Francke  wrote:
> >>>
> 
>  Regarding lack of personal access to anything but Linux, I'll take
> this as
>  an opportunity to remind everyone that ASF committers (not just
> limited to
>  Hadoop committers) are entitled to a free MSDN license, which can get
> you
>  a Windows VM for validating Windows issues and any patches that touch
>  cross-platform concerns, like the native code.  Contributors who are
> not
>  committers still might struggle to get access to Windows, but all of
> us
>  reviewing and committing patches do have access.
> 
> >>>
> >>> Actually, from all I can tell this MSDN offer has been discontinued for
> >>> now. All the information has been removed from the committers repo. Do
> >>>you
> >>> have any more up to date information on this?
> >>>
> >>
> >>
> >>That's interesting.
> >>
> >>I did an SVN update and it went away... looks like something happened on
> >>April 26.
> >>
> >>No idea, though the svn log has a bit of detail
> >>
> >>-
> >>To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
> >>For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
> >>
> >>
> >
> >
> > -
> > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> >
>
>
>
> --
> busbey
>
> -
> To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
>
>


Re: [DISCUSS] Set minimum version of Hadoop 3 to JDK8 (HADOOP-11858)

2016-05-10 Thread Ravi Prakash
+1. Thanks for driving this Akira

On Tue, May 10, 2016 at 10:25 AM, Tsuyoshi Ozawa  wrote:

> > Before cutting 3.0.0-alpha RC, I'd like to drop JDK7 support in trunk.
>
> Sounds good. To do so, we need to check the blockers of the 3.0.0-alpha
> RC, especially upgrading all dependencies which use reflection, first.
>
> Thanks,
> - Tsuyoshi
>
> On Tue, May 10, 2016 at 8:32 AM, Akira AJISAKA
>  wrote:
> > Hi developers,
> >
> > Before cutting 3.0.0-alpha RC, I'd like to drop JDK7 support in trunk.
> > Given this is a critical change, I'm thinking we should get the consensus
> > first.
> >
> > One concern is that, when the minimum version is set to JDK8, we need to
> > configure Jenkins to disable the multi-JDK test only in trunk.
> >
> > Any thoughts?
> >
> > Thanks,
> > Akira
> >
> > -
> > To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
> >
>
> -
> To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
>
>


[jira] [Resolved] (MAPREDUCE-6683) Execute hadoop 1.0.1 application in hadoop 2.6.0 causes Output directory not set exception

2016-04-20 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved MAPREDUCE-6683.
-
Resolution: Invalid

> Execute hadoop 1.0.1 application in hadoop 2.6.0 causes Output directory not 
> set exception
> -
>
> Key: MAPREDUCE-6683
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6683
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mrv2
> Environment: Linux Ubuntu 12.04, hadoop 2.6.0 
>Reporter: Han Gao
>Priority: Minor
>
> The application can run normally in Hadoop 1.0.1 but can't run in 2.6.0, even 
> after adapting it to use the new mapreduce API. 
> org.apache.hadoop.mapred.InvalidJobConfException: Output directory not set.
>   at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:128)
>   at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:889)
>   at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:850)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
>   at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:850)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:500)
>   at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:336)
>   at org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl.run(JobControl.java:233)
>   at java.lang.Thread.run(Thread.java:745)
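
For anyone hitting the same exception when porting a 1.x job: the trace shows the
new-API {{FileOutputFormat.checkOutputSpecs()}} finding no output directory on the
job being submitted. A hedged sketch (not the reporter's application; the class
name, paths, and identity mapper/reducer placeholders are made up) of the minimal
new-API wiring that satisfies that check:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class PortedJobSketch {
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "ported-job");
    job.setJarByClass(PortedJobSketch.class);
    job.setMapperClass(Mapper.class);    // identity mapper, placeholder
    job.setReducerClass(Reducer.class);  // identity reducer, placeholder
    FileInputFormat.addInputPath(job, new Path(args[0]));
    // The call the exception points at being missing: it must be the new-API
    // FileOutputFormat, and it must target the same Job instance that is submitted.
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
{code}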



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: CHANGES.txt is gone from trunk, branch-2, branch-2.8

2016-03-09 Thread Ravi Prakash
Yaayy!! +1

On Tue, Mar 8, 2016 at 10:59 AM, Colin P. McCabe  wrote:

> +1
>
> Thanks, Andrew.  This will avoid so many spurious conflicts when
> cherry-picking changes, and so much wasted time on commit.
>
> best,
> Colin
>
> On Thu, Mar 3, 2016 at 9:11 PM, Andrew Wang 
> wrote:
> > Hi all,
> >
> > With the inclusion of HADOOP-12651 going back to branch-2.8, CHANGES.txt
> > and release notes are now generated by Yetus. I've gone ahead and deleted
> > the manually updated CHANGES.txt from trunk, branch-2, and branch-2.8
> > (HADOOP-11792). Many thanks to Allen for the releasedocmaker.py rewrite,
> > and the Yetus integration.
> >
> > I'll go ahead and update the HowToCommit and HowToRelease wiki pages, but
> > at a high-level, this means we no longer need to edit CHANGES.txt on new
> > commit, streamlining our commit process. CHANGES.txt updates will still
> be
> > necessary for backports to older release lines like 2.6.x and 2.7.x.
> >
> > Happy committing!
> >
> > Best,
> > Andrew
>


Re: Looking to a Hadoop 3 release

2016-02-19 Thread Ravi Prakash
+1 for the plan to start cutting 3.x alpha releases. Thanks for the
initiative Andrew!

On Fri, Feb 19, 2016 at 6:19 AM, Steve Loughran 
wrote:

>
> > On 19 Feb 2016, at 11:27, Dmitry Sivachenko  wrote:
> >
> >
> >> On 19 Feb 2016, at 01:35, Andrew Wang  wrote:
> >>
> >> Hi all,
> >>
> >> Reviving this thread. I've seen renewed interest in a trunk release
> since
> >> HDFS erasure coding has not yet made it to branch-2. Along with JDK8,
> the
> >> shell script rewrite, and many other improvements, I think it's time to
> >> revisit Hadoop 3.0 release plans.
> >>
> >
>
> It's time to start ... I suspect it'll take a while to stabilise. I look
> forward to the new shell scripts already
>
> One thing I do want there is for all the alpha releases to make clear that
> there are no compatibility policies here; protocols may change and there is
> no requirement for the first 3.x release to be compatible with all the 3.0.x
> alphas. That's something we missed in the 2.0.x-alpha process, or at
> least didn't repeat often enough.
>
> >
> > Hello,
> >
> > any chance IPv6 support (HADOOP-11890) will be finished before 3.0 comes
> out?
> >
> > Thanks!
> >
> >
>
> sounds like a good time for a status update on the FB work - and anything
> people can do to test it would be appreciated by all. That includes testing
> on IPv4 systems and, especially, IPv4/v6 systems with Kerberos turned on,
> against both MIT and AD Kerberos servers. At the same time, IPv6 support ought
> to be something that could be added in.
>
>
> I don't have any opinions on timescale, but
>
> +1 to anything related to classpath isolation
> +1 to a careful bump of versions of dependencies.
> +1 to fixing the outstanding Java 8 migration issues, especially the big
> Jersey patch that's just been updated.
> +1 to switching to JIRA-created release notes
>
> Having been doing the slider releases recently, it's clear to me that you
> can do a lot in automating the release process itself. All those steps in
> the release runbook can be turned into targets in a special ant release.xml
> build file, calling maven, gpg, etc.
>
> I think doing something like this for 3.0 will significantly benefit both
> the release phase here and future releases.
>
> This is the slider one:
> https://github.com/apache/incubator-slider/blob/develop/bin/release.xml
>
> It doesn't replace maven, instead it choreographs that along with all the
> other steps: signing and checksumming artifacts, publishing them, voting
>
> it includes
>  -refusing to release if the git repo is modified
>  -making the various git branch/tag/push operations
>  -issuing the various mvn versions:update commands
>  -signing
>  -publishing via asf SVN
>  -using GET calls to verify the artifacts made it
>  -generating the vote and vote result emails (it even counts the votes)
>
> I recommend this be included as part of the release process. It does make
> a difference; we can now cut new releases with no human intervention other
> than editing a properties file and running different targets as the process
> goes through its release and vote phases.
>
> -Steve


[jira] [Resolved] (MAPREDUCE-5074) Remove limits on number of counters and counter groups in MapReduce

2015-05-18 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved MAPREDUCE-5074.
-
Resolution: Won't Fix

We can re-open this if we find users compelling us to increase the limits

 Remove limits on number of counters and counter groups in MapReduce
 ---

 Key: MAPREDUCE-5074
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5074
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: mr-am, mrv2
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Ravi Prakash

 Can we please consider removing limits on the number of counters and counter 
 groups now that it is all user code? Thanks to the much better architecture 
 of YARN in which there is no single Job Tracker we have to worry about 
 overloading, I feel we should do away with this (now arbitrary) constraint on 
 users' capabilities. Thoughts?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (MAPREDUCE-3010) ant mvn-install doesn't work on hadoop-mapreduce-project

2015-05-18 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved MAPREDUCE-3010.
-
Resolution: Invalid

We moved to maven a long time ago.

 ant mvn-install doesn't work on hadoop-mapreduce-project
 

 Key: MAPREDUCE-3010
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3010
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Ravi Prakash

 Even though ant jar works, ant mvn-install fails in the compile-fault-inject 
 step



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (MAPREDUCE-3094) org.apache.hadoop.streaming.TestUlimit.testCommandLine fails intermittantly in 20.205.0

2015-03-19 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved MAPREDUCE-3094.
-
Resolution: Won't Fix

 org.apache.hadoop.streaming.TestUlimit.testCommandLine fails intermittantly 
 in 20.205.0
 ---

 Key: MAPREDUCE-3094
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3094
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: contrib/streaming
Affects Versions: 0.20.205.0
Reporter: Nathan Roberts

 11/09/24 00:22:10 INFO mapred.TaskInProgress: Error from 
 attempt_20110924002157563_0001_m_00_0: java.lang.RuntimeException: 
 PipeMapRed.waitOutputThreads(): subprocess failed with code 134
   at 
 org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:311)
   at 
 org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:545)
   at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:132)
   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
   at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:36)
   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
   at org.apache.hadoop.mapred.Child$4.run(Child.java:261)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
   at org.apache.hadoop.mapred.Child.main(Child.java:255)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (MAPREDUCE-3663) After submitting a job, if the RunJar process gets killed, then the job is hanging

2015-03-19 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved MAPREDUCE-3663.
-
Resolution: Cannot Reproduce

 After submitting a job, if the RunJar process gets killed, then the job is 
 hanging
 --

 Key: MAPREDUCE-3663
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3663
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Affects Versions: 0.23.0
Reporter: Ramgopal N

 When the job is submitted, the RunJar process is created and the YarnChild 
 processes also start running. If at this time the RunJar process gets 
 killed, the job hangs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (MAPREDUCE-3201) Even though jobs are getting failed on particular NM, it is not getting blacklisted

2015-03-19 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved MAPREDUCE-3201.
-
Resolution: Fixed

Please reopen if this is still an issue

 Even though jobs are getting failed on particular NM, it is not getting 
 blacklisted
 ---

 Key: MAPREDUCE-3201
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3201
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Affects Versions: 0.23.0
Reporter: Ramgopal N
Priority: Minor

 {code:xml}
 The YarnChild processes on a particular NM are getting killed continuously, 
 yet the NM is not getting blacklisted.
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: 2.7 status

2015-02-14 Thread Ravi Prakash
I would like the improvements to the Namenode UI (HDFS-7588) to be included in 
2.7 too. All the code is ready, and we can try to get as much of it in as 
possible, piecemeal.
 

 On Saturday, February 14, 2015 3:52 AM, Steve Loughran 
ste...@hortonworks.com wrote:
   

 


On 14 February 2015 at 00:37:07, Karthik Kambatla 
(ka...@cloudera.com) wrote:

2 weeks from now (end of Feb) sounds reasonable. The one feature I would
like to be included is shared-cache: we are pretty close - two more
main items to take care of.

In an offline conversation, Steve mentioned building Windows binaries for
our releases. Do we want to do that for 2.7? If so, can anyone with Windows
expertise set up a Jenkins job to build these artifacts, and maybe hook it
up to https://builds.apache.org/job/HADOOP2_Release_Artifacts_Builder/





someone will have to first fix MiniYarnCluster to come up on the ASF jenkins 
machines; it currently fails with directory permission setup problems that may 
matter in production, but not in test runs.




[jira] [Resolved] (MAPREDUCE-6220) To forbid stderr and stdout for MapReduce job

2015-01-26 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved MAPREDUCE-6220.
-
Resolution: Not a Problem

 To forbid stderr and stdout for MapReduce job
 -

 Key: MAPREDUCE-6220
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6220
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: mrv2
Affects Versions: 2.6.0
Reporter: Yang Hao
Assignee: Yang Hao
 Attachments: MAPREDUCE-6220.patch


 System.out and System.err are an ugly way to print logs, and many times they 
 can do harm to a Hadoop cluster. So we should be able to forbid them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Mapreduce -shuffle and sort enhancement.

2014-11-18 Thread Ravi Prakash
Hi Nada!
Please visit https://github.com/apache/hadoop to clone the source code. Please 
familiarize yourself with git. You should be able to create branches in your 
local repository.
HTH,
Ravi
  

 On Tuesday, November 18, 2014 7:47 AM, Nada Saif nada.sa...@gmail.com 
wrote:
   

 Thanks Ravi!
How do I update and build the code? I want to have two versions of Hadoop on my 
machine, the old one and the updated one, so I can test the performance of 
both. How do I do that?
My regards,
Nada
On Mon, Nov 17, 2014 at 10:17 PM, Ravi Prakash ravi...@ymail.com wrote:

Nada!
You can look at MergeManagerImpl.java



     On Saturday, November 15, 2014 12:14 PM, Nada Saif nada.sa...@gmail.com 
wrote:


 Hi,

I'm interested in contributing to Hadoop, especially the MapReduce shuffle and
sort.
I'm still a novice; can you please lead me through the code? What part of the
code should I focus on?

Also, any ideas that I can work on, if possible!

Thanks & regards,
Nada


    



   

Re: Mapreduce -shuffle and sort enhancement.

2014-11-17 Thread Ravi Prakash
Nada!
You can look at MergeManagerImpl.java

 

 On Saturday, November 15, 2014 12:14 PM, Nada Saif nada.sa...@gmail.com 
wrote:
   

 Hi,

I'm interested in contributing to Hadoop, especially the MapReduce shuffle and
sort.
I'm still a novice; can you please lead me through the code? What part of the
code should I focus on?

Also, any ideas that I can work on, if possible!

Thanks & regards,
Nada




Re: [VOTE] Release Apache Hadoop 2.6.0

2014-11-13 Thread Ravi Prakash
Thanks for the respin Arun!
I've verified all checksums, and tested that the DockerContainerExecutor was 
able to launch jobs.

I'm a +1 on the release
 

 On Thursday, November 13, 2014 3:09 PM, Arun C Murthy 
a...@hortonworks.com wrote:
   

 Folks,

I've created another release candidate (rc1) for hadoop-2.6.0 based on the 
feedback.

The RC is available at: http://people.apache.org/~acmurthy/hadoop-2.6.0-rc1
The RC tag in git is: release-2.6.0-rc1

The maven artifacts are available via repository.apache.org at 
https://repository.apache.org/content/repositories/orgapachehadoop-1013.

Please try the release and vote; the vote will run for the usual 5 days.

thanks,
Arun






Re: [VOTE] Release Apache Hadoop 2.6.0

2014-11-11 Thread Ravi Prakash
Hi Arun!
We are very close to completion on YARN-1964 (DockerContainerExecutor). I'd 
also like HDFS-4882 to be checked in. Do you think these issues merit another 
RC?
Thanks,
Ravi
 

 On Tuesday, November 11, 2014 11:57 AM, Steve Loughran 
ste...@hortonworks.com wrote:
   

 +1 binding

-patched slider pom to build against 2.6.0

-verified build did download, which it did at up to ~8Mbps. Faster than a
local build.

-full clean test runs on OS/X & Linux


Windows 2012:

Same thing. I did have to first build my own set of the Windows native
binaries, by checking out branch-2.6.0, doing a native build, copying the
binaries, and then purging the local m2 repository of Hadoop artifacts to be
confident about what I was building against. For anyone who wants those native libs
they will be up on
https://github.com/apache/incubator-slider/tree/develop/bin/windows/ once
it syncs with the ASF repos.

Afterwards: the tests worked!


On 11 November 2014 02:52, Arun C Murthy a...@hortonworks.com wrote:

 Folks,

 I've created a release candidate (rc0) for hadoop-2.6.0 that I would like
 to see released.

 The RC is available at:
 http://people.apache.org/~acmurthy/hadoop-2.6.0-rc0
 The RC tag in git is: release-2.6.0-rc0

 The maven artifacts are available via repository.apache.org at
 https://repository.apache.org/content/repositories/orgapachehadoop-1012.

 Please try the release and vote; the vote will run for the usual 5 days.

 thanks,
 Arun








[jira] [Resolved] (MAPREDUCE-6028) java.lang.ArithmeticException: / by zero

2014-08-08 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved MAPREDUCE-6028.
-

Resolution: Invalid

 java.lang.ArithmeticException: / by zero
 

 Key: MAPREDUCE-6028
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6028
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 1.2.1
 Environment: hadoop version:1.2.1
Reporter: eagle

 Running any SQL through Hive fails with the following 
 error message:
 2014-08-07 10:22:28,061 INFO org.apache.hadoop.mapred.TaskInProgress: Error 
 from attempt_201407251033_24476_m_02_0: Error initializing 
 attempt_201407251033_24476_m_02_0:
 java.lang.ArithmeticException: / by zero
 After restarting the Hadoop cluster, the problem was resolved. How can I find 
 the root cause of the problem? Thanks.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: [VOTE] Release Apache Hadoop 2.4.1

2014-06-25 Thread Ravi Prakash


+1

Built and deployed clusters on Amazon. Ran a basic test suite.

Thanks Arun

On 06/25/14 17:11, Akira AJISAKA wrote:
 Thanks Arun for another RC!

 I'm +1 (non-binding) for RC2. HDFS-6527 should be reverted because the
issue is only in 2.5 and trunk. In addition, I hope HDFS-6591 will be merged.

 Other than that, RC1 is good to me. I tested RC1 with distributed
cluster on CentOS 6.3:

 - Successful build from src (including native library)
 - Successful RM automatic fail-over and running MapReduce job also
succeeded
 - Successful rolling upgrade HDFS from 2.4.0 to 2.4.1
 - Successful downgrade HDFS from 2.4.1 to 2.4.0
 - Documentation looks good

 Thanks,
 Akira

 (2014/06/23 9:59), Steve Loughran wrote:
 someone's filed a JIRA on loops in Hedged Read, with tests ...

 https://issues.apache.org/jira/browse/HDFS-6591


 On 23 June 2014 08:58, Mit Desai mitde...@yahoo-inc.com.invalid wrote:

 +1 (non-binding)

 Tested on: Fedora17
 -Successful build from src (including native)
 -Verified Signature
 -Deployed source to my single node cluster and ran a couple of sample
MR jobs


 - M!T





 On 6/21/14, 1:51 AM, Arun C Murthy a...@hortonworks.com wrote:

 Folks,

 I've created another release candidate (rc1) for hadoop-2.4.1 based on
 the feedback that I would like to push out.

 The RC is available at:
 http://people.apache.org/~acmurthy/hadoop-2.4.1-rc1
 The RC tag in svn is here:
 https://svn.apache.org/repos/asf/hadoop/common/tags/release-2.4.1-rc1

 The maven artifacts are available via repository.apache.org.

 Please try the release and vote; the vote will run for the usual 7
days.

 thanks,
 Arun



 --
 Arun C. Murthy
 Hortonworks Inc.
 http://hortonworks.com/hdp/











Re: hadoop-2.5 - June end?

2014-06-10 Thread Ravi Prakash
Does this also mean that there won't be a 2.4.1 Apache release?



On Tuesday, June 10, 2014 9:45 AM, Suresh Srinivas sur...@hortonworks.com 
wrote:
 


We should also include the extended attributes feature for HDFS (HDFS-2006)
in release 2.5.


On Mon, Jun 9, 2014 at 9:39 AM, Arun C Murthy a...@hortonworks.com wrote:

 Folks,

  As you can see from the Roadmap wiki, it looks like several items are
 still a bit away from being ready.

  I think rather than wait for them, it will be useful to create an
 intermediate release (2.5) this month - I think ATS security is pretty
 close, so we can ship that. I'm thinking of creating hadoop-2.5 by end of
 the month, with a branch a couple of weeks prior.

  Thoughts?

 thanks,
 Arun






-- 
http://hortonworks.com/download/



Re: [VOTE] Release Apache Hadoop 2.3.0

2014-02-11 Thread Ravi Prakash
Thanks Arun for another release!

+1 non-binding

Verified signatures, deployed a single node cluster and ran sleep and 
wordcount. Everything looks fine.


Regards
Ravi




On Tuesday, February 11, 2014 5:36 PM, Travis Thompson tthomp...@linkedin.com 
wrote:
 
Everything looks good so far, running on 100 nodes with security enabled.

I've found two minor issues with the new Namenode UI so far and will work on 
them over the next few days:

HDFS-5934
HDFS-5935

Thanks,

Travis


On Feb 11, 2014, at 4:53 PM, Mohammad Islam misla...@yahoo.com
wrote:

 Thanks Arun for the initiative.
 
 +1 non-binding.
 
 
 I tested the following:
 1. Build package from the source tar.
 2. Verified with md5sum
 3. Verified with gpg 
 4. Basic testing
 
 Overall, good to go.
 
 Regards,
 Mohammad
 
 
 
 
 On Tuesday, February 11, 2014 2:07 PM, Chen He airb...@gmail.com wrote:
 
 +1, non-binding
 Successfully compiled on MacOS 10.7,
 deployed to Fedora 7, and ran a test job without any problem.
 
 
 
 On Tue, Feb 11, 2014 at 8:49 AM, Arun C Murthy a...@hortonworks.com wrote:
 
 Folks,
 
 I've created a release candidate (rc0) for hadoop-2.3.0 that I would like
 to get released.
 
 The RC is available at:
 http://people.apache.org/~acmurthy/hadoop-2.3.0-rc0
 The RC tag in svn is here:
 https://svn.apache.org/repos/asf/hadoop/common/tags/release-2.3.0-rc0
 
 The maven artifacts are available via repository.apache.org.
 
 Please try the release and vote; the vote will run for the usual 7 days.
 
 thanks,
 Arun
 
 PS: Thanks to Andrew, Vinod & Alejandro for all their help in various
 release activities.

Re: Doubt regarding hadoop simulator

2013-09-17 Thread Ravi Prakash
Suresh!

Rumen is used to generate a trace file from the job history files on a 
pre-existing cluster. This trace file can then be fed into gridmix (for 
example) to simulate the same workload on that cluster again (or another 
cluster for that matter). https://hadoop.apache.org/docs/stable/rumen.html. 
Rumen also allows you to specify a scaling factor, so if you were able to get a 
trace for a much bigger cluster, you could scale it down to run on a much 
smaller cluster.

I'm afraid I'm not familiar with Mumak. 

You might also be interested in this JIRA: 
https://issues.apache.org/jira/browse/YARN-1021 . Unfortunately, it hasn't been 
checked into the repository yet, so you will have to apply the patch yourself.

HTH
Ravi






 From: Suresh S suresh...@gmail.com
To: mapreduce-dev@hadoop.apache.org 
Sent: Tuesday, September 17, 2013 12:48 AM
Subject: Doubt regarding hadoop simulator
 

Hello,

     I am searching online for a MapReduce simulator.
I have heard of names like *Rumen and Mumak*,
but I am unable to understand these simulators.

I have made some changes to Fair scheduling.
I want to run a simulation of the same workload with the original Fair scheduler
and the modified Fair scheduler, and see the difference in response time,
fairness, locality, and network traffic.

I don't have a real environment to run my experiments.

Please help me in this regard. Is this possible with Rumen and Mumak?
Is there any other simulator available?

Thanks in Advance...

*Regards*
*S.Suresh,*
*Research Scholar,*
*Department of Computer Applications,*
*National Institute of Technology,*
*Tiruchirappalli - 620015.*
*+91-9941506562*

[jira] [Created] (MAPREDUCE-5317) Stale files left behind for failed jobs

2013-06-10 Thread Ravi Prakash (JIRA)
Ravi Prakash created MAPREDUCE-5317:
---

 Summary: Stale files left behind for failed jobs
 Key: MAPREDUCE-5317
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5317
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Affects Versions: 0.23.8, 2.0.4-alpha, 3.0.0
Reporter: Ravi Prakash
Assignee: Ravi Prakash


Courtesy [~amar_kamat]!
{quote}
We are seeing _temporary files left behind in the output folder if the job
fails.
 The jobs failed due to hitting a quota issue.
I simply ran the randomwriter (from hadoop examples) with the default setting.
That failed and left behind some stray files.
{quote}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (MAPREDUCE-3135) Unit test org.apache.hadoop.mapred.TestJobHistoryServer fails intermittently

2013-06-05 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved MAPREDUCE-3135.
-

  Resolution: Duplicate
Release Note: Presumably TestJobHistoryServer is working fine after 
MAPREDUCE-4798

 Unit test org.apache.hadoop.mapred.TestJobHistoryServer fails intermittently
 

 Key: MAPREDUCE-3135
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3135
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Ravi Prakash

 Every once in a while org.apache.hadoop.mapred.TestJobHistoryServer fails due 
 to a timeout.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (MAPREDUCE-5074) Remove limits on number of counters and counter groups in MapReduce

2013-03-15 Thread Ravi Prakash (JIRA)
Ravi Prakash created MAPREDUCE-5074:
---

 Summary: Remove limits on number of counters and counter groups in 
MapReduce
 Key: MAPREDUCE-5074
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5074
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: mr-am, mrv2
Affects Versions: 2.0.3-alpha, 3.0.0, 0.23.6
Reporter: Ravi Prakash


Can we please consider removing limits on the number of counters and counter 
groups now that it is all user code? Thanks to the much better architecture of 
YARN, in which there is no single JobTracker we have to worry about 
overloading, I feel we should do away with this (now arbitrary) constraint on 
users' capabilities. Thoughts?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (MAPREDUCE-3779) Create hard and soft limits for job counters

2013-03-12 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved MAPREDUCE-3779.
-

  Resolution: Won't Fix
Release Note: I'm marking this JIRA as won't fix. We can consider 
re-opening it if you propose a compelling use case

 Create hard and soft limits for job counters
 

 Key: MAPREDUCE-3779
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3779
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: jobtracker, tasktracker
Affects Versions: 0.23.0, 1.0.0
Reporter: Dave Shine
Priority: Minor

 The mapreduce.job.counters.limit is not overridable at the job level.  While 
 it is necessary to limit the number of counters to reduce overhead, there are 
 times when exceeding the limit is required.  Currently, the only solution is 
 to increase the limit cluster wide.
 I would like to see a soft limit set in the mapred-site.xml that can be 
 overridden at the job level, in addition to the hard limit that exists today.
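
(A small sketch of the proposed soft/hard limit interaction, for illustration only; apart from mapreduce.job.counters.limit being the existing cluster-wide setting, the parameter names below are assumptions and this is not actual Hadoop code.)

{code:java}
// Hypothetical resolution of an effective counter limit: a job may raise the
// soft (default) limit, but never beyond a cluster-wide hard limit.
public class CounterLimitSketch {
    public static int effectiveLimit(int clusterSoftLimit,
                                     int clusterHardLimit,
                                     Integer jobRequestedLimit) {
        int requested = (jobRequestedLimit != null) ? jobRequestedLimit
                                                    : clusterSoftLimit;
        return Math.min(requested, clusterHardLimit);
    }

    public static void main(String[] args) {
        System.out.println(effectiveLimit(120, 500, 300));   // job raises limit to 300
        System.out.println(effectiveLimit(120, 500, 1000));  // capped at the hard limit 500
        System.out.println(effectiveLimit(120, 500, null));  // falls back to the soft limit 120
    }
}
{code}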

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: Why In-memory Mapoutput is necessary in ReduceCopier

2013-03-11 Thread Ravi Prakash
Hi Ling,

Yes! It is because of performance concerns. We want to keep and merge map 
outputs in memory as much as we can. The amount of memory reserved for this 
purpose is configurable. Obviously, storing fetched map outputs on disk, then 
reading them back from disk to merge them, and then writing them back out to 
disk, is a lot more expensive than doing it all in memory. 

Please let us know if you find there was an opportunity to keep the map output 
in memory but we did not, and instead shuffled to disk.

Thanks
Ravi
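
(A minimal Java sketch, for illustration only, of the kind of in-memory versus on-disk decision described above. It assumes a single reserved in-memory buffer and the 25% threshold mentioned in the question quoted below; it is not the actual ramManager.canFitInMemory() or merge code.)

{code:java}
// Illustrative sketch only; not Hadoop's actual shuffle code.
public class InMemoryShuffleSketch {
    private final long maxInMemBytes;   // memory reserved for fetched map outputs
    private long usedInMemBytes;        // memory currently in use

    public InMemoryShuffleSketch(long maxInMemBytes) {
        this.maxInMemBytes = maxInMemBytes;
    }

    // A fetched map output stays in memory only if it is small relative to the
    // reserved buffer (here 25%, as mentioned in the thread) and still fits in
    // the free space; otherwise it would be shuffled straight to disk.
    public synchronized boolean canFitInMemory(long decompressedSize) {
        return decompressedSize < maxInMemBytes / 4
                && usedInMemBytes + decompressedSize <= maxInMemBytes;
    }

    public synchronized void reserve(long size) { usedInMemBytes += size; }
    public synchronized void release(long size) { usedInMemBytes -= size; }
}
{code}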





 From: Ling Kun lkun.e...@gmail.com
To: mapreduce-dev@hadoop.apache.org 
Sent: Monday, March 11, 2013 5:27 AM
Subject: Why In-memory Mapoutput is necessary in ReduceCopier
 
Dear all,

     I am focusing on the map-output copier implementation. This part of the
code tries to fetch map outputs and merge them into a file that can be fed
to the reduce functions. I have the following questions.

1. All the local-file map output data will be merged by LocalFSMerge, and the
in-memory map outputs will be merged by InMemFSMergeThread. For the
InMemFSMergeThread, there is also a writer object which writes the result to
outputPath (ReduceTask.java Line 2843). It seems that after merging, both the
in-memory and the local-file map output data end up stored in the local file
system. Why not just use local files for all map output data?

2. After using HTTP to fetch a fragment of a map output file, some of the map
output data is selected and kept in memory, while the rest is written directly
to the reducer's local disk. Which map output is kept in memory is determined
in MapOutputCopier.getMapOutput(), which calls ramManager.canFitInMemory().
Why not store all the data on disk?

3. According to the comment, Hadoop will keep a file in memory if it meets:
(a) the size of the (decompressed) file is less than 25% of the total in-memory
fs, and (b) there is space available in the in-memory fs. Why? Is it because
of performance?



Thanks

yours,
Ling Kun

-- 
http://www.lingcc.com

[jira] [Created] (MAPREDUCE-4989) JSONify DataTables input data for Attempts page

2013-02-07 Thread Ravi Prakash (JIRA)
Ravi Prakash created MAPREDUCE-4989:
---

 Summary: JSONify DataTables input data for Attempts page
 Key: MAPREDUCE-4989
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4989
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: jobhistoryserver, mr-am
Affects Versions: 0.23.6
Reporter: Ravi Prakash
Assignee: Ravi Prakash


Use deferred rendering for the attempts page as was done in MAPREDUCE-4720. I'm 
sorry I didn't realize earlier that this table could be huge too. Thanks to 
[~jlowe] for pointing it out.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (MAPREDUCE-4786) Job End Notification retry interval is 5 milliseconds by default

2012-11-11 Thread Ravi Prakash (JIRA)
Ravi Prakash created MAPREDUCE-4786:
---

 Summary: Job End Notification retry interval is 5 milliseconds by 
default
 Key: MAPREDUCE-4786
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4786
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Affects Versions: 0.23.4, 2.0.2-alpha, 3.0.0
Reporter: Ravi Prakash
Assignee: Ravi Prakash


Courtesy [~stevenwillis] and [~qwertymaniac]
{quote}
From: Harsh J
I believe the configs of the latter of both of the above
classifications were meant to be added in as replacement names, but
the property names got added in incorrectly (as the former/older named ones)
in the XML.

the word seconds in the description of retries? The code in MR2's
JobEndNotifier seems to expect seconds but uses the value directly in
Thread.sleep(…) without converting it to milliseconds, which may be a bug we
need to fix as well, perhaps in the same issue as the configs one.

On Fri, Nov 9, 2012 at 11:21 PM, Steven Willis swil...@compete.com wrote:
 And I noticed that there are some duplicate properties with different values 
 and different descriptions:
{quote}
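
(For illustration of the Thread.sleep() issue quoted above, here is a minimal, self-contained Java sketch; the class name and the configured value are assumptions for this example, and this is not the actual JobEndNotifier code.)

{code:java}
// Hypothetical sketch of the seconds-vs-milliseconds mismatch described above.
public class RetryIntervalSketch {
    // Assume the retry interval was configured as 5, intended as seconds.
    static final long RETRY_INTERVAL = 5;

    public static void main(String[] args) throws InterruptedException {
        // Buggy pattern: the value is passed straight to Thread.sleep(),
        // which takes milliseconds, so the wait is only 5 ms.
        Thread.sleep(RETRY_INTERVAL);

        // Corrected pattern: convert seconds to milliseconds explicitly.
        Thread.sleep(RETRY_INTERVAL * 1000L);
    }
}
{code}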

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (MAPREDUCE-4747) Fancy graphs for visualizing task progress

2012-10-24 Thread Ravi Prakash (JIRA)
Ravi Prakash created MAPREDUCE-4747:
---

 Summary: Fancy graphs for visualizing task progress
 Key: MAPREDUCE-4747
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4747
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: mrv2
Affects Versions: 0.23.4
Reporter: Ravi Prakash


We should think about what kind of map / reduce graphs we want to see in MRv2 
to visualize all the task progress / completion information we have.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (MAPREDUCE-4711) Append time elapsed since job-start-time for finished tasks

2012-10-05 Thread Ravi Prakash (JIRA)
Ravi Prakash created MAPREDUCE-4711:
---

 Summary: Append time elapsed since job-start-time for finished 
tasks
 Key: MAPREDUCE-4711
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4711
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobhistoryserver
Affects Versions: 0.23.3
Reporter: Ravi Prakash
Assignee: Ravi Prakash


In 0.20.x/1.x, the analyze job link gave this information

bq. The last Map task task_sometask finished at (relative to the Job launch 
time): 5/10 20:23:10 (1hrs, 27mins, 54sec)

The time it took for the last task to finish needs to be calculated mentally in 
0.23. I believe we should print it next to the finish time.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (MAPREDUCE-4645) Providing a random seed to Slive should make the sequence of filenames completely deterministic

2012-09-07 Thread Ravi Prakash (JIRA)
Ravi Prakash created MAPREDUCE-4645:
---

 Summary: Providing a random seed to Slive should make the sequence 
of filenames completely deterministic
 Key: MAPREDUCE-4645
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4645
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: performance, test
Affects Versions: 2.0.0-alpha, 0.23.1
Reporter: Ravi Prakash
Assignee: Ravi Prakash


Using the -random seed option still doesn't produce a deterministic sequence of 
filenames. Hence there's no way to replicate the performance test. If I'm 
providing a seed, it's obvious that I want the test to be reproducible.
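
(As a generic illustration of the reproducibility being asked for here, and not Slive's actual filename generator: two java.util.Random instances created with the same seed produce identical sequences, so deriving every filename from one seeded Random would make the whole sequence deterministic. The naming scheme below is an assumption.)

{code:java}
import java.util.Random;

// Generic determinism sketch; not how Slive actually builds filenames.
public class SeededNamesSketch {
    public static void main(String[] args) {
        long seed = 42L; // stands in for the value passed via the -random seed option
        Random r1 = new Random(seed);
        Random r2 = new Random(seed);
        for (int i = 0; i < 5; i++) {
            String a = "file_" + Long.toHexString(r1.nextLong());
            String b = "file_" + Long.toHexString(r2.nextLong());
            System.out.println(a.equals(b)); // always prints true
        }
    }
}
{code}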

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (MAPREDUCE-4606) TestMRJobs and TestUberAM fail if /mapred/history are not present

2012-08-29 Thread Ravi Prakash (JIRA)
Ravi Prakash created MAPREDUCE-4606:
---

 Summary: TestMRJobs and TestUberAM fail if /mapred/history are not 
present
 Key: MAPREDUCE-4606
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4606
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Ravi Prakash


This might be related to the test framework rather than those tests themselves. 
To make them pass, I had to create /mapred/history. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (MAPREDUCE-4135) MRAppMaster throws IllegalStateException while shutting down

2012-07-10 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved MAPREDUCE-4135.
-

   Resolution: Duplicate
Fix Version/s: 0.23.3

Thanks Tucu! HADOOP-8325 seems to have fixed this issue. Duping this jira

 MRAppMaster throws IllegalStateException while shutting down
 

 Key: MAPREDUCE-4135
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4135
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Devaraj K
Assignee: Ravi Prakash
 Fix For: 0.23.3

 Attachments: MAPREDUCE-4135.branch-0.23.patch, MAPREDUCE-4135.patch, 
 MAPREDUCE-4135.patch


 MRAppMaster always throws an IllegalStateException to stderr while shutting 
 down. It doesn't look good having this exception in the stderr file of the 
 MRAppMaster container.
 {code:xml}
 WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use 
 org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
 Exception in thread Thread-1 java.lang.IllegalStateException: Shutdown in 
 progress
   at 
 java.lang.ApplicationShutdownHooks.remove(ApplicationShutdownHooks.java:55)
   at java.lang.Runtime.removeShutdownHook(Runtime.java:220)
   at org.apache.hadoop.fs.FileSystem$Cache.remove(FileSystem.java:2148)
   at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2180)
   at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2157)
   at org.apache.hadoop.fs.FileSystem.closeAll(FileSystem.java:361)
   at 
 org.apache.hadoop.mapreduce.v2.app.MRAppMaster$MRAppMasterShutdownHook.run(MRAppMaster.java:1014)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (MAPREDUCE-4297) Usersmap file in gridmix should not fail on empty lines

2012-05-30 Thread Ravi Prakash (JIRA)
Ravi Prakash created MAPREDUCE-4297:
---

 Summary: Usersmap file in gridmix should not fail on empty lines
 Key: MAPREDUCE-4297
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4297
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: contrib/gridmix
Affects Versions: 0.23.1
Reporter: Ravi Prakash
Assignee: Ravi Prakash


An empty line (e.g. at the end of the file) in the usersmap file will cause 
gridmix to fail. Empty lines should be silently ignored.
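
(A minimal sketch of the intended behaviour, assuming a simple one-entry-per-line users-map file; this is illustrative only and not Gridmix's actual parsing code.)

{code:java}
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper showing how blank lines could be skipped instead of
// causing a failure when reading the users-map file.
public class UsersMapSketch {
    public static List<String> readEntries(Reader in) throws IOException {
        List<String> entries = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(in)) {
            String line;
            while ((line = reader.readLine()) != null) {
                if (line.trim().isEmpty()) {
                    continue; // silently ignore empty lines, e.g. at the end of the file
                }
                entries.add(line.trim());
            }
        }
        return entries;
    }
}
{code}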

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: What's the status of AspectJ files in branch-1?

2012-05-08 Thread Ravi Prakash
Thanks Cos!

I was able to run ant jar-system jar-test-system successfully with my
change. However, when I tried to run

ant test-system -Dhadoop.conf.dir.deployed=${HADOOP_CONF_DIR}

as mentioned in
https://wiki.apache.org/hadoop/HowToUseSystemTestFramework, all the
streaming and gridmix tests had this error:
 java.lang.IllegalArgumentException: No Configuration passed for hadoop
home and hadoop conf directories
at
org.apache.hadoop.test.system.process.HadoopDaemonRemoteCluster.populateDirectories(HadoopDaemonRemoteCluster.java:206)
at
org.apache.hadoop.test.system.process.HadoopDaemonRemoteCluster.init(HadoopDaemonRemoteCluster.java:170)
at
org.apache.hadoop.mapreduce.test.system.MRCluster.createCluster(MRCluster.java:107)
at
org.apache.hadoop.mapred.gridmix.GridmixSystemTestCase.before(GridmixSystemTestCase.java:69)

I tried setting -Dtest.system.hdrc.hadoopconfdir=confDir
-Dtest.system.hdrc.hadoophome=hadoophome
-Dhadoop.conf.dir.deployed=confdir but the tests still failed.

I understand these are gridmix and streaming test failures, and may not be
directly related to the FI framework. But the dependency graph of
test-system is such that these tests run before the FI tests, and so
it's failing before reaching the FI tests. Are you able to run these
successfully, and if yes, how?

Thanks
Ravi


On Mon, May 7, 2012 at 5:47 PM, Konstantin Boudnik c...@apache.org wrote:

 Hi Ravi.

 You need to run Herriot build to make sure that everything is ok after your
 changes. The way to do it is as follows:

% ant jar-system jar-test-system

 this will perform the compilation of the Hadoop binaries with the Herriot
 APIs woven in.

 More information about Herriot can be found here
  https://wiki.apache.org/hadoop/HowToUseSystemTestFramework

 Cos

 On Mon, May 07, 2012 at 04:45PM, Ravi Prakash wrote:
  Hi folks,
 
  I'm patching changes to StatisticsCollector in
  https://issues.apache.org/jira/browse/MAPREDUCE-4227 .
 
  A simple grep shows StatisticsCollector is also referenced in
  src/test/system/aop/org/apache/hadoop/mapred/StatisticsCollectorAspect.aj
  src/test/system/aop/org/apache/hadoop/mapred/JobTrackerAspect.aj
 
  How can I check whether or not my changes cause any issues in the
 AspectJ (tests)?
 
  Thanks
  Ravi.



[jira] [Created] (MAPREDUCE-4227) TimeWindow statistics are not updated for TaskTrackers which have been restarted.

2012-05-04 Thread Ravi Prakash (JIRA)
Ravi Prakash created MAPREDUCE-4227:
---

 Summary: TimeWindow statistics are not updated for TaskTrackers 
which have been restarted.
 Key: MAPREDUCE-4227
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4227
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 0.20.205.0
Reporter: Ravi Prakash
Assignee: Ravi Prakash


Whenever a TaskTracker is restarted after the JobTracker has been running for a 
while (an hour / day maybe), the TimeWindow statistics on the JobTracker Active 
nodes page are stuck at 0.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (MAPREDUCE-4197) Include the hsqldb jar in the hadoop-mapreduce tar file

2012-04-25 Thread Ravi Prakash (JIRA)
Ravi Prakash created MAPREDUCE-4197:
---

 Summary: Include the hsqldb jar in the hadoop-mapreduce tar file
 Key: MAPREDUCE-4197
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4197
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 0.23.1
Reporter: Ravi Prakash
Assignee: Ravi Prakash


Courtesy Brahma

{quote}
In the previous Hadoop releases (20.XX), hsqldb was provided.
But in hadoop-2.0.0 it is not present. Was it intentionally deleted, or is it missing?
{quote}



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Reopened] (MAPREDUCE-3983) TestTTResourceReporting can fail, and should just be deleted

2012-04-10 Thread Ravi Prakash (Reopened) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash reopened MAPREDUCE-3983:
-


Hi Bobby!

I'm sorry but the patch does not delete the file. Could you please svn rm the 
file 
./hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/mapred/TestTTResourceReporting.java
 ?

The test is failing right now with
{noformat}
java.lang.ClassNotFoundException: 
org.apache.hadoop.mapred.TestTTResourceReporting
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:169)
{noformat}

 TestTTResourceReporting can fail, and should just be deleted
 

 Key: MAPREDUCE-3983
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3983
 Project: Hadoop Map/Reduce
  Issue Type: Test
  Components: mrv1
Affects Versions: 0.23.2
Reporter: Robert Joseph Evans
Assignee: Ravi Prakash
 Fix For: 0.23.3, 2.0.0

 Attachments: MAPREDUCE-3983.patch


 TestTTResourceReporting can fail.  It is an ant test for task trackers and 
 should just be removed, because task trackers are no longer supported outside 
 of the ant tests.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (MAPREDUCE-4134) Remove references of mapred.child.ulimit etc. since they are not being used any more

2012-04-10 Thread Ravi Prakash (Created) (JIRA)
Remove references of mapred.child.ulimit etc. since they are not being used any 
more


 Key: MAPREDUCE-4134
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4134
 Project: Hadoop Map/Reduce
  Issue Type: Task
  Components: mrv2
Affects Versions: 0.23.2
Reporter: Ravi Prakash
Assignee: Ravi Prakash


Courtesy Philip Su, we found that (mapred.child.ulimit, mapreduce.map.ulimit, 
mapreduce.reduce.ulimit) were not being used at all. The configuration exists 
but is never used. It's also mentioned in mapred-default.xml and 
templates/../mapred-site.xml . Also the method getUlimitMemoryCommand in 
Shell.java is now useless and can be removed.



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (MAPREDUCE-4088) Task stuck in JobLocalizer prevented other tasks on the same node from committing

2012-03-30 Thread Ravi Prakash (Created) (JIRA)
Task stuck in JobLocalizer prevented other tasks on the same node from 
committing
-

 Key: MAPREDUCE-4088
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4088
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv1
Affects Versions: 0.20.205.0
Reporter: Ravi Prakash
Priority: Critical


We saw that as a result of HADOOP-6963, one task was stuck in this

Thread 23668: (state = IN_NATIVE)
 - java.io.UnixFileSystem.getBooleanAttributes0(java.io.File) @bci=0 (Compiled 
frame; information may be imprecise)
 - java.io.UnixFileSystem.getBooleanAttributes(java.io.File) @bci=2, line=228 
(Compiled frame)
 - java.io.File.exists() @bci=20, line=733 (Compiled frame)
 - org.apache.hadoop.fs.FileUtil.getDU(java.io.File) @bci=3, line=446 (Compiled 
frame)
 - org.apache.hadoop.fs.FileUtil.getDU(java.io.File) @bci=52, line=455 
(Compiled frame)
 - org.apache.hadoop.fs.FileUtil.getDU(java.io.File) @bci=52, line=455 
(Compiled frame)

 TONS MORE OF THIS SAME LINE
 - org.apache.hadoop.fs.FileUtil.getDU(java.io.File) @bci=52, line=455 
(Compiled frame)
.
.
 - org.apache.hadoop.fs.FileUtil.getDU(java.io.File) @bci=52, line=455 
(Compiled frame)
 - org.apache.hadoop.fs.FileUtil.getDU(java.io.File) @bci=52, line=455 
(Interpreted frame)
ne=451 (Interpreted frame)
 - 
org.apache.hadoop.mapred.JobLocalizer.downloadPrivateCacheObjects(org.apache.hadoop.conf.Configuration,
 java.net.URI[], org.apache.hadoop.fs.Path[], long[], boolean[], boolean) 
@bci=150, line=324 (Interpreted frame)
 - 
org.apache.hadoop.mapred.JobLocalizer.downloadPrivateCache(org.apache.hadoop.conf.Configuration)
 @bci=40, line=349 (Interpreted frame) 51, line=383 (Interpreted frame)
 - org.apache.hadoop.mapred.JobLocalizer.runSetup(java.lang.String, 
java.lang.String, org.apache.hadoop.fs.Path, 
org.apache.hadoop.mapred.TaskUmbilicalProtocol) @bci=46, line=477 (Interpreted 
frame)
 - org.apache.hadoop.mapred.JobLocalizer$3.run() @bci=20, line=534 (Interpreted 
frame)
 - org.apache.hadoop.mapred.JobLocalizer$3.run() @bci=1, line=531 (Interpreted 
frame)
 - 
java.security.AccessController.doPrivileged(java.security.PrivilegedExceptionAction,
 java.security.AccessControlContext) @bci=0 (Interpreted frame)
 - javax.security.auth.Subject.doAs(javax.security.auth.Subject, 
java.security.PrivilegedExceptionAction) @bci=42, line=396 (Interpreted frame)
 - 
org.apache.hadoop.security.UserGroupInformation.doAs(java.security.PrivilegedExceptionAction)
 @bci=14, line=1082 (Interpreted frame)
 - org.apache.hadoop.mapred.JobLocalizer.main(java.lang.String[]) @bci=266, 
line=530 (Interpreted frame)

While all other tasks on the same node were stuck in 
Thread 32141: (state = BLOCKED)
 - java.lang.Thread.sleep(long) @bci=0 (Interpreted frame)
 - 
org.apache.hadoop.mapred.Task.commit(org.apache.hadoop.mapred.TaskUmbilicalProtocol,
 org.apache.hadoop.mapred.Task$TaskReporter, 
org.apache.hadoop.mapreduce.OutputCommitter) @bci=24, line=980 (Compiled frame)
 - 
org.apache.hadoop.mapred.Task.done(org.apache.hadoop.mapred.TaskUmbilicalProtocol,
 org.apache.hadoop.mapred.Task$TaskReporter) @bci=146, line=871 (Interpreted 
frame)
 - org.apache.hadoop.mapred.ReduceTask.run(org.apache.hadoop.mapred.JobConf, 
org.apache.hadoop.mapred.TaskUmbilicalProtocol) @bci=470, line=423 (Interpreted 
frame)
 - org.apache.hadoop.mapred.Child$4.run() @bci=29, line=255 (Interpreted frame)
 - 
java.security.AccessController.doPrivileged(java.security.PrivilegedExceptionAction,
 java.security.AccessControlContext) @bci=0 (Interpreted frame)
 - javax.security.auth.Subject.doAs(javax.security.auth.Subject, 
java.security.PrivilegedExceptionAction) @bci=42, line=396 (Interpreted frame)
 - 
org.apache.hadoop.security.UserGroupInformation.doAs(java.security.PrivilegedExceptionAction)
 @bci=14, line=1082 (Interpreted frame)
 - org.apache.hadoop.mapred.Child.main(java.lang.String[]) @bci=738, line=249 
(Interpreted frame)

This should never happen. A stuck task should never prevent other tasks from 
different jobs on the same node from committing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (MAPREDUCE-4009) AM container log links need to be clicked twice to get to the actual log file

2012-03-14 Thread Ravi Prakash (Created) (JIRA)
AM container log links need to be clicked twice to get to the actual log file
-

 Key: MAPREDUCE-4009
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4009
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2, webapps
Affects Versions: 0.23.2
Reporter: Ravi Prakash
Priority: Minor


On the RM page, click on an application, then click on the link for AM Container 
logs.
This page contains links to stdout, stderr and syslog (i.e. 
hostname/node/containerlogs/container_1331751290995_0001_01_01/*stdout*/?start=-4096
 )

Clicking on any of them still shows the same page. NOW clicking on any of them 
will take you to the log. e.g. 
hostname/node/containerlogs/container_1331751290995_0001_01_01/*stdout/stdout*/?start=-4096

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (MAPREDUCE-3999) Tracking link gives an error if the AppMaster hasn't started yet

2012-03-12 Thread Ravi Prakash (Created) (JIRA)
Tracking link gives an error if the AppMaster hasn't started yet


 Key: MAPREDUCE-3999
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3999
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2, webapps
Affects Versions: 0.23.1
Reporter: Ravi Prakash
Assignee: Ravi Prakash


Courtesy [~sseth]
{quote}
The MRAppMaster died before writing anything.

Steps to generate the error:
1. Set up a queue with 1 max active application per user
2. Submit a long running job to this queue.
3. Submit another job to the queue as the same user. Access the tracking URL
for job 2 directly or via Oozie (not via the RM link - which is rewritten once
the app starts).

This would exist in situations where the queue doesn't have enough capacity -
or for the small period of time between app submission and AM start.
{quote}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (MAPREDUCE-3963) NodeManagers die on startup if they can't connect to the RM

2012-03-05 Thread Ravi Prakash (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved MAPREDUCE-3963.
-

Resolution: Duplicate

Thanks Bhallamudi! Yes! This issue is a duplicate of MAPREDUCE-3676

 NodeManagers die on startup if they can't connect to the RM
 ---

 Key: MAPREDUCE-3963
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3963
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2, nodemanager
Affects Versions: 0.23.1
Reporter: Ravi Prakash
Priority: Critical

 Steps to reproduce.
 Start the NM when the RM is down. The NM tries 10 times, then exits. It 
 should keep trying forever.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (MAPREDUCE-3963) NodeManagers die on startup if they can't connect to the RM

2012-03-02 Thread Ravi Prakash (Created) (JIRA)
NodeManagers die on startup if they can't connect to the RM
---

 Key: MAPREDUCE-3963
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3963
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2, nodemanager
Affects Versions: 0.23.1
Reporter: Ravi Prakash
Priority: Critical


Steps to reproduce.
Start the NM when the RM is down. The NM tries 10 times, then exits. It should 
keep trying forever.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (MAPREDUCE-1688) A failing retry'able notification in JobEndNotifier can affect notifications of other jobs.

2012-02-21 Thread Ravi Prakash (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved MAPREDUCE-1688.
-

   Resolution: Fixed
Fix Version/s: 0.23.1
 Assignee: Ravi Prakash

This has been fixed by MAPREDUCE-3028. Since each AM is now doing the job-end 
notification, it has automatically been parallelized. Also, timeouts have been 
set in the HttpURLConnection object.

 A failing retry'able notification in JobEndNotifier can affect notifications 
 of other jobs.
 ---

 Key: MAPREDUCE-1688
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1688
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobtracker
Affects Versions: 0.20.1
Reporter: Vinod Kumar Vavilapalli
Assignee: Ravi Prakash
 Fix For: 0.23.1


 The JobTracker puts all the notification commands into a delay-queue.  It has 
 a single thread that loops through this queue and sends out the 
 notifications.  When it hits failures with any notification which is 
 configured to be retired via {{job.end.retry.attempts}} and 
 {{job.end.retry.interval}}, the notification is queued back again. A single 
 notification with sufficiently large number of configured retries and which 
 consistently fails will affect other notifications in the queue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (MAPREDUCE-3799) TestServiceLevelAuthorization testServiceLevelAuthorization failing

2012-02-13 Thread Ravi Prakash (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved MAPREDUCE-3799.
-

   Resolution: Cannot Reproduce
Fix Version/s: 0.23.2
   0.23.1

Our Jenkins job is no longer seeing this unit test fail. It seems to have been 
fixed. If we see this again, I'll reopen this JIRA.

 TestServiceLevelAuthorization testServiceLevelAuthorization failing
 ---

 Key: MAPREDUCE-3799
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3799
 Project: Hadoop Map/Reduce
  Issue Type: Test
  Components: test
Affects Versions: 0.23.0
Reporter: Ravi Prakash
 Fix For: 0.23.1, 0.23.2


 *Error Message*
 Expected file distcache not found
 *Stacktrace*
 junit.framework.AssertionFailedError: Expected file distcache not found
   at 
 org.apache.hadoop.mapred.TestMiniMRWithDFS.verifyContents(TestMiniMRWithDFS.java:200)
   at 
 org.apache.hadoop.mapred.TestMiniMRWithDFS.checkTaskDirectories(TestMiniMRWithDFS.java:149)
   at 
 org.apache.hadoop.mapred.TestMiniMRWithDFS.runWordCount(TestMiniMRWithDFS.java:240)
   at 
 org.apache.hadoop.security.authorize.TestServiceLevelAuthorization.testServiceLevelAuthorization(TestServiceLevelAuthorization.java:95)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (MAPREDUCE-3800) TestHarFileSystem testRelativeArchives

2012-02-13 Thread Ravi Prakash (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved MAPREDUCE-3800.
-

   Resolution: Cannot Reproduce
Fix Version/s: 0.23.2
   0.23.1

Our Jenkins job is no longer seeing this unit test fail. It seems to have been 
fixed. If we see this again, I'll reopen this JIRA.

 TestHarFileSystem testRelativeArchives
 --

 Key: MAPREDUCE-3800
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3800
 Project: Hadoop Map/Reduce
  Issue Type: Test
  Components: test
Affects Versions: 0.23.0
Reporter: Ravi Prakash
  Labels: test
 Fix For: 0.23.1, 0.23.2


 *Error Message*
 failed test
 *Stacktrace*
 junit.framework.AssertionFailedError: failed test
   at 
 org.apache.hadoop.tools.TestHarFileSystem.testRelativeArchives(TestHarFileSystem.java:227)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (MAPREDUCE-3840) JobEndNotifier doesn't use the proxyToUse during connecting

2012-02-08 Thread Ravi Prakash (Created) (JIRA)
JobEndNotifier doesn't use the proxyToUse during connecting
---

 Key: MAPREDUCE-3840
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3840
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Affects Versions: 0.23.0, 0.24.0
Reporter: Ravi Prakash
Assignee: Ravi Prakash
Priority: Blocker


I stupidly removed the proxyToUse from openConnection() in MAPREDUCE-3649.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (MAPREDUCE-3798) TestJobCleanup testCustomCleanup is failing

2012-02-03 Thread Ravi Prakash (Created) (JIRA)
TestJobCleanup testCustomCleanup is failing
---

 Key: MAPREDUCE-3798
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3798
 Project: Hadoop Map/Reduce
  Issue Type: Test
  Components: test
Affects Versions: 0.23.0
Reporter: Ravi Prakash


File 
somepath/hadoop-mapreduce-project/build/test/data/test-job-cleanup/output-8/_custom_cleanup
 missing for job job_20120203035807432_0009

junit.framework.AssertionFailedError: File 
somepath/hadoop-mapreduce-project/build/test/data/test-job-cleanup/output-8/_custom_cleanup
 missing for job job_20120203035807432_0009
at 
org.apache.hadoop.mapred.TestJobCleanup.testKilledJob(TestJobCleanup.java:228)
at 
org.apache.hadoop.mapred.TestJobCleanup.testCustomCleanup(TestJobCleanup.java:302)
at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
at junit.extensions.TestSetup$1.protect(TestSetup.java:23)
at junit.extensions.TestSetup.run(TestSetup.java:27)


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (MAPREDUCE-3799) TestServiceLevelAuthorization testServiceLevelAuthorization failing

2012-02-03 Thread Ravi Prakash (Created) (JIRA)
TestServiceLevelAuthorization testServiceLevelAuthorization failing
---

 Key: MAPREDUCE-3799
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3799
 Project: Hadoop Map/Reduce
  Issue Type: Test
  Components: test
Affects Versions: 0.23.0
Reporter: Ravi Prakash


*Error Message*

Expected file distcache not found

*Stacktrace*

junit.framework.AssertionFailedError: Expected file distcache not found
at 
org.apache.hadoop.mapred.TestMiniMRWithDFS.verifyContents(TestMiniMRWithDFS.java:200)
at 
org.apache.hadoop.mapred.TestMiniMRWithDFS.checkTaskDirectories(TestMiniMRWithDFS.java:149)
at 
org.apache.hadoop.mapred.TestMiniMRWithDFS.runWordCount(TestMiniMRWithDFS.java:240)
at 
org.apache.hadoop.security.authorize.TestServiceLevelAuthorization.testServiceLevelAuthorization(TestServiceLevelAuthorization.java:95)






[jira] [Created] (MAPREDUCE-3800) TestHarFileSystem testRelativeArchives

2012-02-03 Thread Ravi Prakash (Created) (JIRA)
TestHarFileSystem testRelativeArchives
--

 Key: MAPREDUCE-3800
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3800
 Project: Hadoop Map/Reduce
  Issue Type: Test
  Components: test
Affects Versions: 0.23.0
Reporter: Ravi Prakash


*Error Message*
failed test
*Stacktrace*
junit.framework.AssertionFailedError: failed test
at 
org.apache.hadoop.tools.TestHarFileSystem.testRelativeArchives(TestHarFileSystem.java:227)






[jira] [Created] (MAPREDUCE-3712) The mapreduce tar does not contain the hadoop-mapreduce-client-jobclient-tests.jar.

2012-01-23 Thread Ravi Prakash (Created) (JIRA)
The mapreduce tar does not contain the 
hadoop-mapreduce-client-jobclient-tests.jar. 


 Key: MAPREDUCE-3712
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3712
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Affects Versions: 0.23.0
Reporter: Ravi Prakash
Priority: Blocker


Working MRv1 tests were moved into the Maven build as part of MAPREDUCE-3582. 
Some classes such as MRBench, SleepJob, and FailJob, which are essential for QE, 
were moved into jobclient-tests.jar. However, the tar.gz file does not contain this jar.





[jira] [Resolved] (MAPREDUCE-3406) Add node information to bin/mapred job -list-attempt-ids and other improvements

2012-01-12 Thread Ravi Prakash (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved MAPREDUCE-3406.
-

Resolution: Duplicate

Marking as a duplicate. I'd forgotten I'd opened MAPREDUCE-3406.

 Add node information to bin/mapred job -list-attempt-ids and other 
 improvements
 ---

 Key: MAPREDUCE-3406
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3406
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: mrv2
Affects Versions: 0.23.0
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Fix For: 0.23.1


 From [~rramya]
 Providing the NM information where the containers are scheduled in bin/mapred 
 job -list-attempt-ids will be helpful for automation and debugging, and will 
 avoid grepping through the AM logs.
 From my own observation, list-attempt-ids should list the attempt ids without 
 requiring the arguments. The arguments, if given, can be used to filter the 
 results. From the usage:
 bq. [-list-attempt-ids job-id task-type task-state]. Valid values for 
 task-type are MAP REDUCE JOB_SETUP JOB_CLEANUP TASK_CLEANUP. Valid values 
 for task-state are running, completed





[jira] [Reopened] (MAPREDUCE-3406) Add node information to bin/mapred job -list-attempt-ids and other improvements

2012-01-12 Thread Ravi Prakash (Reopened) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash reopened MAPREDUCE-3406:
-


 Add node information to bin/mapred job -list-attempt-ids and other 
 improvements
 ---

 Key: MAPREDUCE-3406
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3406
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: mrv2
Affects Versions: 0.23.0
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Fix For: 0.23.1


 From [~rramya]
 Providing the NM information where the containers are scheduled in bin/mapred 
 job -list-attempt-ids will be helpful for automation and debugging, and will 
 avoid grepping through the AM logs.
 From my own observation, list-attempt-ids should list the attempt ids without 
 requiring the arguments. The arguments, if given, can be used to filter the 
 results. From the usage:
 bq. [-list-attempt-ids job-id task-type task-state]. Valid values for 
 task-type are MAP REDUCE JOB_SETUP JOB_CLEANUP TASK_CLEANUP. Valid values 
 for task-state are running, completed





[jira] [Resolved] (MAPREDUCE-3662) Command line ask: NM info where containers are launched

2012-01-12 Thread Ravi Prakash (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved MAPREDUCE-3662.
-

Resolution: Duplicate

Marking as a duplicate. I'd forgotten I'd opened MAPREDUCE-3406.

 Command line ask: NM info where containers are launched
 ---

 Key: MAPREDUCE-3662
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3662
 Project: Hadoop Map/Reduce
  Issue Type: Wish
  Components: mrv2
Affects Versions: 0.23.0
Reporter: Ravi Prakash

 Courtesy [~rramya]
 {quote}
 we had requested for the NM information where the containers are scheduled to 
 be made available in job
 -list-attempt-ids. This will be helpful in automation, debugging and avoid 
 grepping through the AM logs.
 {quote}





[jira] [Created] (MAPREDUCE-3662) Command line ask: NM info where containers are launched

2012-01-11 Thread Ravi Prakash (Created) (JIRA)
Command line ask: NM info where containers are launched
---

 Key: MAPREDUCE-3662
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3662
 Project: Hadoop Map/Reduce
  Issue Type: Wish
  Components: mrv2
Affects Versions: 0.23.0
Reporter: Ravi Prakash


Courtesy [~rramya]
{quote}
we had requested for the NM information where the containers are scheduled to 
be made available in job
-list-attempt-ids. This will be helpful in automation, debugging and avoid 
grepping through the AM logs.
{quote}






[jira] [Created] (MAPREDUCE-3620) Sometimes not all the gridmix jobs are processed.

2012-01-05 Thread Ravi Prakash (Created) (JIRA)
Sometimes not all the gridmix jobs are processed.
---

 Key: MAPREDUCE-3620
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3620
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Affects Versions: 0.23.0
Reporter: Ravi Prakash
Assignee: Ravi Prakash
Priority: Critical


Courtesy [~vinaythota]
{quote}
The job trace contains 1205 jobs, and Gridmix started processing 1200 of them. 
However, after completion of the gridmix run, the execution summary showed that 
1196 jobs were processed and the remaining 4 jobs are missing. One log shows 
1196 jobs processed and another log shows 1200 jobs processed.
{quote}






[jira] [Created] (MAPREDUCE-3614) finalState UNDEFINED if AM is killed by hand

2012-01-03 Thread Ravi Prakash (Created) (JIRA)
finalState UNDEFINED if AM is killed by hand


 Key: MAPREDUCE-3614
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3614
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Affects Versions: 0.23.0
Reporter: Ravi Prakash
Assignee: Ravi Prakash


Courtesy [~dcapwell]

{quote}
If the AM is running and you kill the process (sudo kill #pid), the State in 
Yarn would be FINISHED and FinalStatus is UNDEFINED.  The Tracking UI would say 
History and point to the proxy url (which will redirect to the history 
server).

The state should be more descriptive, indicating that the job failed, and the 
tracking URL shouldn't point to the history server.
{quote}





[jira] [Created] (MAPREDUCE-3596) Job hangs after completion of 99% of the map phase with the hadoop-0.23.1.1112091615 RE build

2011-12-22 Thread Ravi Prakash (Created) (JIRA)
Job hangs after completion of 99% of the map phase with the hadoop-0.23.1.1112091615 RE 
build
-

 Key: MAPREDUCE-3596
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3596
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: applicationmaster, mrv2
Affects Versions: 0.23.0
Reporter: Ravi Prakash
Priority: Critical


Courtesy [~vinaythota]
{quote}
Ran the sort benchmark a couple of times, and every time the job hung after 
completing 99% of the map phase. Some map tasks failed, and some of the pending 
map tasks were not scheduled.
Cluster size is 350 nodes.

Build Details:
==
Version:0.23.1.1112091615, 1212592
Compiled:   Fri Dec 9 16:25:27 PST 2011 by someone from 
branches/branch-0.23/hadoop-common-project/hadoop-common 

ResourceManager version:0.23.1.1112091615 from 1212681 by someone 
source checksum
6e54430abdc912c91c05b9208a3361de on Fri Dec 9 16:52:07 PST 2011
Hadoop version: 0.23.1.1112091615 from 1212592 by someone source 
checksum 999b78e0eadace831529ee78ed29c8e1 on
Fri Dec 9 16:25:27 PST 2011
{quote}








JAXB / Guice errors

2011-12-20 Thread Ravi Prakash
Hi,

Is anyone seeing these errors when they try to access the RM Web UI?

HTTP ERROR 500

Problem accessing /. Reason:

Guice provision errors:

1) Error injecting constructor, java.lang.LinkageError: JAXB 2.1 API is
being loaded from the bootstrap classloader, but this RI (from
jar:file:somePath/hadoop-0.23.1-SNAPSHOT/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar!/com/sun/xml/bind/v2/model/impl/ModelBuilder.class)
needs 2.2 API. Use the endorsed directory mechanism to place jaxb-api.jar
in the bootstrap classloader. (See
http://java.sun.com/j2se/1.6.0/docs/guide/standards/)
  at
org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver.init(JAXBContextResolver.java:60)
  at
org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebApp.setup(RMWebApp.java:45)
  while locating
org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver

1 error

Anyone fix it yet?

Cheers
Ravi.


Re: JAXB / Guice errors

2011-12-20 Thread Ravi Prakash
Hi Vinod,

I solved my issue. I had a stale version of java pointed to by JAVA_HOME.
$ ./jdk1.6.0_01/bin/java -version
java version 1.6.0_01
Java(TM) SE Runtime Environment (build 1.6.0_01-b06)
Java HotSpot(TM) Server VM (build 1.6.0_01-b06, mixed mode)

Updating to $ ./jdk1.6.0_30/bin/java -version
java version 1.6.0_30
Java(TM) SE Runtime Environment (build 1.6.0_30-b12)
Java HotSpot(TM) Server VM (build 20.5-b03, mixed mode)

fixed the problem.

Thanks
Ravi
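
For anyone hitting the same LinkageError, a small diagnostic sketch (a 
hypothetical helper using only standard JDK calls, not something from this 
thread): it prints which JVM the process is on and where the JAXB API classes 
were loaded from; a null code source means the bootstrap classloader, which is 
exactly what the error above complains about.

// Hedged diagnostic sketch for the JDKs of that era (javax.xml.bind ships with Java 6).
public class JaxbDiagnostic {
  public static void main(String[] args) {
    System.out.println("java.home    = " + System.getProperty("java.home"));
    System.out.println("java.version = " + System.getProperty("java.version"));
    // null code source => the JAXB API came from the bootstrap classloader.
    Object source = javax.xml.bind.JAXBContext.class
        .getProtectionDomain().getCodeSource();
    System.out.println("JAXB API loaded from: " + source);
  }
}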


On Tue, Dec 20, 2011 at 12:13 PM, Vinod Kumar Vavilapalli 
vino...@hortonworks.com wrote:

 Can you please open a ticket? It must be related to MAPREDUCE-2863 . Thomas
 can help with this.

 Thanks,
 +Vinod


 On Tue, Dec 20, 2011 at 10:09 AM, Ravi Prakash ravihad...@gmail.com
 wrote:

  Hi,
 
  Is anyone seeing these errors when they try to access the RM Web UI?
 
  HTTP ERROR 500
 
  Problem accessing /. Reason:
 
 Guice provision errors:
 
  1) Error injecting constructor, java.lang.LinkageError: JAXB 2.1 API is
  being loaded from the bootstrap classloader, but this RI (from
 
 
 jar:file:somePath/hadoop-0.23.1-SNAPSHOT/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar!/com/sun/xml/bind/v2/model/impl/ModelBuilder.class)
  needs 2.2 API. Use the endorsed directory mechanism to place jaxb-api.jar
  in the bootstrap classloader. (See
  http://java.sun.com/j2se/1.6.0/docs/guide/standards/)
   at
 
 
 org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver.init(JAXBContextResolver.java:60)
   at
 
 
 org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebApp.setup(RMWebApp.java:45)
   while locating
  org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver
 
  1 error
 
  Anyone fix it yet?
 
  Cheers
  Ravi.
 



[jira] [Created] (MAPREDUCE-3541) Fix broken TestJobQueueClient test

2011-12-13 Thread Ravi Prakash (Created) (JIRA)
Fix broken TestJobQueueClient test
--

 Key: MAPREDUCE-3541
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3541
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Affects Versions: 0.23.1
Reporter: Ravi Prakash
Assignee: Ravi Prakash
Priority: Critical


Ant build complains 
[javac] 
/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/mapred/TestJobQueueClient.java:80:
 
printJobQueueInfo(org.apache.hadoop.mapred.JobQueueInfo,java.io.Writer,java.lang.String)
 in org.apache.hadoop.mapred.JobQueueClient cannot be applied to 
(org.apache.hadoop.mapred.JobQueueInfo,java.io.StringWriter)
[javac] client.printJobQueueInfo(root, writer);
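
A minimal sketch of the call-site fix the compiler error points at, using 
hypothetical stand-ins for the org.apache.hadoop.mapred types (the name of the 
extra String parameter is assumed): the test needs to supply the third argument 
that printJobQueueInfo now takes.

{noformat}
import java.io.StringWriter;
import java.io.Writer;

// Hypothetical stand-ins, used only to illustrate the call-site change.
class PrintJobQueueInfoSketch {
  static class JobQueueInfo { }

  // Sketch of the new three-argument signature.
  static void printJobQueueInfo(JobQueueInfo info, Writer writer, String prefix) {
    // ... the real implementation lives in JobQueueClient ...
  }

  static void callSite(JobQueueInfo root) {
    StringWriter writer = new StringWriter();
    // Old call, which no longer compiles: printJobQueueInfo(root, writer);
    printJobQueueInfo(root, writer, "");
  }
}
{noformat}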






[jira] [Created] (MAPREDUCE-3482) Enable HTTP proxy to be specified for job end notification

2011-11-29 Thread Ravi Prakash (Created) (JIRA)
Enable HTTP proxy to be specified for job end notification
--

 Key: MAPREDUCE-3482
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3482
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: applicationmaster, mrv2
Affects Versions: 0.23.0
Reporter: Ravi Prakash


Courtesy Ratandeep Singh Ratti
{quote}
The AM should be able to notify the job.end.notification.url. Hence this 
request has to go through an HTTP proxy, since ACLs won't be open from the AM to 
external machines.
{quote}






[jira] [Resolved] (MAPREDUCE-3482) Enable HTTP proxy to be specified for job end notification

2011-11-29 Thread Ravi Prakash (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved MAPREDUCE-3482.
-

Resolution: Duplicate

This is a duplicate of MAPREDUCE-3382

 Enable HTTP proxy to be specified for job end notification
 --

 Key: MAPREDUCE-3482
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3482
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: applicationmaster, mrv2
Affects Versions: 0.23.0
Reporter: Ravi Prakash
Assignee: Anupam Seth

 Courtesy Ratandeep Singh Ratti
 {quote}
  The AM should be able to notify the job.end.notification.url. Hence this 
  request has to go through an HTTP proxy, since ACLs won't be open from the AM to 
  external machines.
 {quote}





[jira] [Created] (MAPREDUCE-3484) Job end notification method should be called before stop() in handle(JobFinishEvent)

2011-11-29 Thread Ravi Prakash (Created) (JIRA)
Job end notification method should be called before stop() in 
handle(JobFinishEvent)


 Key: MAPREDUCE-3484
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3484
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mr-am, mrv2
Affects Versions: 0.23.0
Reporter: Ravi Prakash
Assignee: Ravi Prakash


We noticed JobEndNotifier was getting an InterruptedException before completing 
all its retries.
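
A minimal sketch of the ordering this issue asks for, with hypothetical method 
names rather than the real MRAppMaster API: run the job-end notification, 
including its retries, before the AM's services are stopped, so the notifier is 
not interrupted mid-retry.

{noformat}
// Hypothetical sketch of the desired ordering in the JobFinishEvent handler.
class JobFinishHandlerSketch {
  void handleJobFinish() {
    try {
      notifyJobEnd();  // HTTP callback to job.end.notification.url, with retries
    } finally {
      stop();          // only tear down services after the notification attempt
    }
  }

  void notifyJobEnd() { /* perform the notification and its retries */ }
  void stop()         { /* shut down the AM's services */ }
}
{noformat}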





[jira] [Created] (MAPREDUCE-3476) Optimize YARN API calls

2011-11-28 Thread Ravi Prakash (Created) (JIRA)
Optimize YARN API calls
---

 Key: MAPREDUCE-3476
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3476
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: mrv2
Affects Versions: 0.23.0
Reporter: Ravi Prakash
Assignee: Ravi Prakash
Priority: Critical


Several YARN API calls are taking inordinately long. This might be a 
performance blocker.





[jira] [Created] (MAPREDUCE-3406) Add node information to bin/mapred job -list-attempt-ids and other improvements

2011-11-15 Thread Ravi Prakash (Created) (JIRA)
Add node information to bin/mapred job -list-attempt-ids and other improvements
---

 Key: MAPREDUCE-3406
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3406
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: mrv2
Affects Versions: 0.23.0
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Fix For: 0.23.1


From [~rramya]
Providing the NM information where the containers are scheduled in bin/mapred 
job -list-attempt-ids will be helpful for automation and debugging, and will 
avoid grepping through the AM logs.

From my own observation, list-attempt-ids should list the attempt ids without 
requiring the arguments. The arguments, if given, can be used to filter the 
results. From the usage:
bq. [-list-attempt-ids job-id task-type task-state]. Valid values for 
task-type are MAP REDUCE JOB_SETUP JOB_CLEANUP TASK_CLEANUP. Valid values for 
task-state are running, completed







[jira] [Created] (MAPREDUCE-3304) TestRMContainerAllocator#testBlackListedNodes fails intermittently

2011-10-28 Thread Ravi Prakash (Created) (JIRA)
TestRMContainerAllocator#testBlackListedNodes fails intermittently
--

 Key: MAPREDUCE-3304
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3304
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2, test
Affects Versions: 0.23.0
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Fix For: 0.23.0


Thanks to Hitesh for verifying!

bq. The heartbeat event should be drained before the schedule call.
bq. -- Hitesh

I can see this test fail intermittently on my Mac OSX 10.5 and Fedora 14 
machines. 
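
A small sketch of the fix Hitesh describes, using hypothetical test helpers (the 
real dispatcher and allocator APIs may differ): wait until the heartbeat event 
has been handled before the schedule call, which removes the race that makes the 
test flaky.

{noformat}
// Hypothetical helpers, for illustration only: the point is the ordering.
class DrainBeforeScheduleSketch {
  interface Dispatcher { void await(); }    // blocks until pending events are handled
  interface Allocator  { void schedule(); } // runs a scheduling pass

  static void heartbeatThenSchedule(Dispatcher dispatcher, Allocator allocator) {
    dispatcher.await();    // drain the heartbeat event first
    allocator.schedule();  // now the allocator sees the blacklisted node
  }
}
{noformat}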





[jira] [Resolved] (MAPREDUCE-3160) Merge -r 1177530:1177531 from trunk to branch-0.23 to fix MAPREDUCE-2996 broke ant test compilation

2011-10-14 Thread Ravi Prakash (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved MAPREDUCE-3160.
-

Resolution: Invalid

This was a problem with my ivy cache. After clearing it and rebuilding 
everything, the problem went away.

 Merge -r 1177530:1177531 from trunk to branch-0.23 to fix MAPREDUCE-2996 
 broke ant test compilation
 ---

 Key: MAPREDUCE-3160
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3160
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Ravi Prakash

 I git bisected and the problem starts from commit 
 adb810babaf25b9f9dae75b43d4beac782deaa01. Running ant gives:
 {noformat}
 [jsp-compile] log4j:WARN See 
 http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
 [javac] 
 /home/raviprak/Code/hadoop/hadoop-all/hadoop-mapreduce-project/build.xml:398: 
 warning: 'includeantruntime' was not set, defaulting to 
 build.sysclasspath=last; set to false for repeatable builds
 [javac] Compiling 2 source files to 
 /home/raviprak/Code/hadoop/hadoop-all/hadoop-mapreduce-project/build/classes
 [javac] 
 /home/raviprak/Code/hadoop/hadoop-all/hadoop-mapreduce-project/src/java/org/apache/hadoop/mapred/JobInProgress.java:697:
  cannot find symbol
 [javac] symbol  : constructor 
 JobInitedEvent(org.apache.hadoop.mapred.JobID,long,int,int,java.lang.String,boolean)
 [javac] location: class 
 org.apache.hadoop.mapreduce.jobhistory.JobInitedEvent
 [javac] JobInitedEvent jie = new JobInitedEvent(
 [javac]  ^
 [javac] Note: 
 /home/raviprak/Code/hadoop/hadoop-all/hadoop-mapreduce-project/src/java/org/apache/hadoop/mapred/JobInProgress.java
  uses or overrides a deprecated API.
 [javac] Note: Recompile with -Xlint:deprecation for details.
 [javac] 1 error
 {noformat}





Tests timing out!

2011-10-12 Thread Ravi Prakash
Hi folks,

Mapreduce v1 tests are timing out / failing on branch-0.23. Does anyone have
information about this?

Here's how I'm able to consistently reproduce this:
1. rm -rf ~/.ivy2 ~/.m2 # Clean up repository caches
2. cd branch-0.23
3. mvn -Pdist -DskipTests -P-cbuild install
4. cd hadoop-mapreduce-project
5. mvn -P-cbuild -DskipTests install assembly:assembly
6. ant -Dtestcase=TestFileSystem test

You can directly try Step 6 too.

Any information would be much appreciated.

Thanks
Ravi.


Re: Tests timing out!

2011-10-12 Thread Ravi Prakash
Hi folks,

I git bisected and the problem starts from
147e2cf81f97580c011d3667aa6444a970b44baa.

I'll follow up with the folks / file a jira.

Thanks
Ravi



On Wed, Oct 12, 2011 at 9:50 AM, Ravi Prakash ravihad...@gmail.com wrote:

 Hi folks,

 Mapreduce v1 tests are timing out / failing on branch-0.23. Does anyone
 have information about this?

 Here's how I'm able to consistently reproduce this:
 1. rm -rf ~/.ivy2 ~/.m2 # Clean up repository caches
 2. cd branch-0.23
 3. mvn -Pdist -DskipTests -P-cbuild install
 4. cd hadoop-mapreduce-project
 5. mvn -P-cbuild -DskipTests install assembly:assembly
 6. ant -Dtestcase=TestFileSystem test

 You can directly try Step 6 too.

 Any information would be very appreciated.

 Thanks
 Ravi.




[jira] [Created] (MAPREDUCE-3176) ant mapreduce tests are timing out

2011-10-12 Thread Ravi Prakash (Created) (JIRA)
ant mapreduce tests are timing out
--

 Key: MAPREDUCE-3176
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3176
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: test
Affects Versions: 0.23.0
Reporter: Ravi Prakash
Assignee: Hitesh Shah
Priority: Blocker
 Fix For: 0.23.0


Secondary YARN builds started taking inordinately long and lots of tests 
started failing. Usually the secondary build would take ~ 2 hours. But recently 
even after 7 hours it wasn't done. 





[jira] [Created] (MAPREDUCE-3147) Handle leaf queues with the same name properly

2011-10-05 Thread Ravi Prakash (Created) (JIRA)
Handle leaf queues with the same name properly
--

 Key: MAPREDUCE-3147
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3147
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: mrv2
Affects Versions: 0.23.0
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Fix For: 0.23.0


If there are two leaf queues with the same name, there is ambiguity when 
submitting jobs and when displaying queue info. When such ambiguity exists, the 
system should ask for clarification or show disambiguated information.
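
As a hedged sketch of one way the clarification could work (entirely 
hypothetical; the real scheduler resolution may differ), a resolver could accept 
a short queue name only when it maps to a single leaf queue and otherwise demand 
the fully qualified name:

{noformat}
import java.util.List;
import java.util.Map;

// Hypothetical resolver: reject ambiguous short names instead of silently
// picking one of the identically named leaf queues.
class LeafQueueResolverSketch {
  static String resolve(String name, Map<String, List<String>> shortNameToFullPaths) {
    List<String> matches = shortNameToFullPaths.get(name);
    if (matches == null || matches.isEmpty()) {
      throw new IllegalArgumentException("Unknown queue: " + name);
    }
    if (matches.size() > 1) {
      // Ask for clarification by listing the fully qualified candidates.
      throw new IllegalArgumentException(
          "Ambiguous queue name '" + name + "', use one of: " + matches);
    }
    return matches.get(0); // e.g. "root.teamA.default"
  }
}
{noformat}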





[jira] [Created] (MAPREDUCE-3087) CLASSPATH not the same after MAPREDUCE-2880

2011-09-25 Thread Ravi Prakash (JIRA)
CLASSPATH not the same after MAPREDUCE-2880
---

 Key: MAPREDUCE-3087
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3087
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Ravi Prakash


After MAPREDUCE-2880, my classpath was missing key jar files. 





Jobs not running after MAPREDUCE-2880

2011-09-23 Thread Ravi Prakash
Hi Arun/Vinod,

After commit d4dca4eabf83a97d158f1e1caa4801020679d5e2
Date:   Wed Sep 21 18:52:27 2011 +
MAPREDUCE-2880. svn merge -c r1173783 --ignore-ancestry ../../trunk/
git-svn-id:
https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.23@117379213f79535-47bb-0310-9956-ffa450edef68

My mapreduce jobs are failing
2011-09-23 10:27:08,534 INFO  ipc.HadoopYarnRPC
(HadoopYarnProtoRPC.java:getProxy(49)) - Creating a HadoopYarnProtoRpc proxy
for protocol interface org.apache.hadoop.mapreduce.v2.api.MRClientProtocol
2011-09-23 10:27:08,634 INFO  mapreduce.Job
(Job.java:monitorAndPrintJob(1209)) - Running job: job_1316791524705_0002
2011-09-23 10:27:09,653 INFO  mapreduce.Job
(Job.java:monitorAndPrintJob(1229)) -  map 0% reduce 0%
2011-09-23 10:27:16,739 INFO  mapreduce.Job
(Job.java:monitorAndPrintJob(1242)) - Job job_1316791524705_0002 failed with
state FAILED
2011-09-23 10:27:16,786 INFO  mapreduce.Job
(Job.java:monitorAndPrintJob(1246)) - Counters: 0

Digging into the stderr logs: I see this single line
Exception in thread main java.lang.NoClassDefFoundError:
org/apache/hadoop/mapreduce/v2/app/MRAppMaster

What do I need to add to my environment / config so that the magic happens
again?

Thanks
Ravi.


Re: Jobs not running after MAPREDUCE-2880

2011-09-23 Thread Ravi Prakash
Hi Arun,

Unsecure single node.

I'm attaching the classpath I grepped | sort | uniq from the two task.sh
files I got (one from the working version and the other from the non-working
version). It looks like the classpath that worked had some jars that are not
present in the new classpath.

I'm guessing that, as part of the CLASSPATH simplification, we may have missed
something that was being included earlier?

Thanks
Ravi


On Fri, Sep 23, 2011 at 12:18 PM, Arun Murthy a...@hortonworks.com wrote:

 This is secure mode or unsecured? Cluster or single node? Tx

 Sent from my iPhone

 On Sep 23, 2011, at 8:37 AM, Ravi Prakash ravihad...@gmail.com wrote:

  Hi Arun/Vinod,
 
  After commit d4dca4eabf83a97d158f1e1caa4801020679d5e2
  Date:   Wed Sep 21 18:52:27 2011 +
  MAPREDUCE-2880. svn merge -c r1173783 --ignore-ancestry ../../trunk/
  git-svn-id:
 
 https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.23@117379213f79535-47bb-0310-9956-ffa450edef68
 
  My mapreduce jobs are failing
  2011-09-23 10:27:08,534 INFO  ipc.HadoopYarnRPC
  (HadoopYarnProtoRPC.java:getProxy(49)) - Creating a HadoopYarnProtoRpc
 proxy
  for protocol interface
 org.apache.hadoop.mapreduce.v2.api.MRClientProtocol
  2011-09-23 10:27:08,634 INFO  mapreduce.Job
  (Job.java:monitorAndPrintJob(1209)) - Running job: job_1316791524705_0002
  2011-09-23 10:27:09,653 INFO  mapreduce.Job
  (Job.java:monitorAndPrintJob(1229)) -  map 0% reduce 0%
  2011-09-23 10:27:16,739 INFO  mapreduce.Job
  (Job.java:monitorAndPrintJob(1242)) - Job job_1316791524705_0002 failed
 with
  state FAILED
  2011-09-23 10:27:16,786 INFO  mapreduce.Job
  (Job.java:monitorAndPrintJob(1246)) - Counters: 0
 
  Digging into the stderr logs: I see this single line
  Exception in thread main java.lang.NoClassDefFoundError:
  org/apache/hadoop/mapreduce/v2/app/MRAppMaster
 
  What do I need to add to my environment / config so that the magic
 happens
  again?
 
  Thanks
  Ravi.



[jira] [Created] (MAPREDUCE-3069) running test-patch gives me 10 warnings for missing 'build.plugins.plugin.version' of org.apache.rat:apache-rat-plugin

2011-09-22 Thread Ravi Prakash (JIRA)
running test-patch gives me 10 warnings for missing 
'build.plugins.plugin.version' of org.apache.rat:apache-rat-plugin
--

 Key: MAPREDUCE-3069
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3069
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Fix For: 0.23.0


apache-rat-plugin doesn't have a version specified in hadoop-mapreduce-project 
and hadoop-yarn pom.xml files









[jira] [Resolved] (MAPREDUCE-2790) [MR-279] Add additional field for storing the AM/job history info on CLI

2011-09-22 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-2790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved MAPREDUCE-2790.
-

Resolution: Duplicate

 [MR-279] Add additional field for storing the AM/job history info on CLI
 

 Key: MAPREDUCE-2790
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2790
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Affects Versions: 0.23.0
Reporter: Ramya Sunil
Assignee: Ravi Prakash
Priority: Critical
 Fix For: 0.23.0

 Attachments: MAPREDUCE-2790.v1.txt, MAPREDUCE-2790.v2.txt, 
 MAPREDUCE-2790.v3.txt, MAPREDUCE-2790.v4.txt


 bin/mapred job [-list [all]] displays the AM or job history location in the 
 SchedulingInfo field. An additional column has to be added to display the 
 AM/job history information. Currently, the output reads:
 {noformat}
 JobId   State   StartTime   UserNameQueue   Priority
 SchedulingInfo
 jobID  FAILED   0   ramya   default NORMAL  AM 
 information/job history location
 {noformat}





[jira] [Created] (MAPREDUCE-3023) Queue state is not being translated properly (is always assumed to be running)

2011-09-16 Thread Ravi Prakash (JIRA)
Queue state is not being translated properly (is always assumed to be running)
--

 Key: MAPREDUCE-3023
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3023
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Fix For: 0.23.0


During translation of QueueInfo, 

bq. TypeConverter.java:435 : queueInfo.toString(), QueueState.RUNNING,
ought to be 
bq. queueInfo.toString(), 
QueueState.getState(queueInfo.getQueueState().toString().toLowerCase()),
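
In sketch form, with hypothetical enum stand-ins for the two QueueState types 
involved (not the real org.apache.hadoop classes), the translation should derive 
the target state from the source queue's state instead of hard-coding RUNNING:

{noformat}
// Hypothetical enum stand-ins; the point is the lookup by lower-cased name.
class QueueStateTranslationSketch {
  enum YarnQueueState { RUNNING, STOPPED }

  enum MapredQueueState {
    RUNNING("running"), STOPPED("stopped"), UNDEFINED("undefined");

    private final String stateName;
    MapredQueueState(String stateName) { this.stateName = stateName; }

    static MapredQueueState getState(String state) {
      for (MapredQueueState s : values()) {
        if (s.stateName.equals(state)) {
          return s;
        }
      }
      return UNDEFINED;
    }
  }

  static MapredQueueState translate(YarnQueueState source) {
    // Bug pattern: always returning MapredQueueState.RUNNING.
    // Fix pattern (per the bq. snippet above): look the state up by name.
    return MapredQueueState.getState(source.toString().toLowerCase());
  }
}
{noformat}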





[jira] [Created] (MAPREDUCE-3010) ant mvn-install doesn't work on hadoop-mapreduce-project

2011-09-14 Thread Ravi Prakash (JIRA)
ant mvn-install doesn't work on hadoop-mapreduce-project


 Key: MAPREDUCE-3010
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3010
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Ravi Prakash


Even though ant jar works, ant mvn-install fails in the compile-fault-inject 
step





Re: Trunk and 0.23 build failing with clean .m2 directory

2011-08-29 Thread Ravi Prakash
Yeah, I've seen this before. Sometimes I had to descend into child
directories and mvn install them before I could mvn install the parents. I'm
hoping/guessing that issue is fixed now.

On Mon, Aug 29, 2011 at 11:39 AM, Robert Evans ev...@yahoo-inc.com wrote:

 Wow this is odd install works just fine, but compile fails unless I do an
 install first (I found this trying to run test-patch).

 $mvn --version
 Apache Maven 3.0.3 (r1075438; 2011-02-28 11:31:09-0600)
 Maven home: /home/evans/bin/maven
 Java version: 1.6.0_22, vendor: Sun Microsystems Inc.
 Java home: /home/evans/bin/jdk1.6.0/jre
 Default locale: en_US, platform encoding: UTF-8
 OS name: linux, version: 2.6.18-238.12.1.el5, arch: i386, family:
 unix

 Has anyone else seen this, or is there something messed up with my machine?

 Thanks,

 Bobby

 On 8/29/11 11:18 AM, Robert Evans ev...@yahoo-inc.com wrote:

 I am getting the following errors when I try to build either trunk or 0.23
 with a clean maven cache.  I don't get any errors if I use my old cache.

 [INFO] --- maven-compiler-plugin:2.3.2:compile (default-compile) @
 hadoop-yarn-common ---
 [INFO] Compiling 2 source files to

 /home/evans/src/hadoop-git/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-
 common/target/classes
 [INFO]
 [INFO]
 
 [INFO] Building hadoop-yarn-server-common 0.24.0-SNAPSHOT
 [INFO]
 
 [INFO]
 
 [INFO] Reactor Summary:
 [INFO]
 [INFO] Apache Hadoop Project POM . SUCCESS [0.714s]
 [INFO] Apache Hadoop Annotations . SUCCESS [0.323s]
 [INFO] Apache Hadoop Project Dist POM  SUCCESS [0.001s]
 [INFO] Apache Hadoop Assemblies .. SUCCESS [0.025s]
 [INFO] Apache Hadoop Alfredo . SUCCESS [0.067s]
 [INFO] Apache Hadoop Common .. SUCCESS [2.117s]
 [INFO] Apache Hadoop Common Project .. SUCCESS [0.001s]
 [INFO] Apache Hadoop HDFS  SUCCESS [1.419s]
 [INFO] Apache Hadoop HDFS Project  SUCCESS [0.001s]
 [INFO] hadoop-yarn-api ... SUCCESS [7.019s]
 [INFO] hadoop-yarn-common  SUCCESS [2.181s]
 [INFO] hadoop-yarn-server-common . FAILURE [0.058s]
 [INFO] hadoop-yarn-server-nodemanager  SKIPPED
 [INFO] hadoop-yarn-server-resourcemanager  SKIPPED
 [INFO] hadoop-yarn-server-tests .. SKIPPED
 [INFO] hadoop-yarn-server  SKIPPED
 [INFO] hadoop-yarn ... SKIPPED
 [INFO] hadoop-mapreduce-client-core .. SKIPPED
 [INFO] hadoop-mapreduce-client-common  SKIPPED
 [INFO] hadoop-mapreduce-client-shuffle ... SKIPPED
 [INFO] hadoop-mapreduce-client-app ... SKIPPED
 [INFO] hadoop-mapreduce-client-hs  SKIPPED
 [INFO] hadoop-mapreduce-client-jobclient . SKIPPED
 [INFO] hadoop-mapreduce-client ... SKIPPED
 [INFO] hadoop-mapreduce .. SKIPPED
 [INFO] Apache Hadoop Main  SKIPPED
 [INFO]
 
 [INFO] BUILD FAILURE
 [INFO]
 
 [INFO] Total time: 14.938s
 [INFO] Finished at: Mon Aug 29 11:18:06 CDT 2011
 [INFO] Final Memory: 29M/207M
 [INFO]
 
 [ERROR] Failed to execute goal on project hadoop-yarn-server-common: Could
 not resolve dependencies for project
 org.apache.hadoop:hadoop-yarn-server-common:jar:0.24.0-SNAPSHOT: Failure to
 find org.apache.hadoop:hadoop-yarn-common:jar:tests:0.24.0-SNAPSHOT in
 http://ymaven.corp.yahoo.com:/proximity/repository/apache.snapshot was
 cached in the local repository, resolution will not be reattempted until
 the
 update interval of local apache.snapshot mirror has elapsed or updates are
 forced - [Help 1]
 [ERROR]
 [ERROR] To see the full stack trace of the errors, re-run Maven with the -e
 switch.
 [ERROR] Re-run Maven using the -X switch to enable full debug logging.
 [ERROR]
 [ERROR] For more information about the errors and possible solutions,
 please
 read the following articles:
 [ERROR] [Help 1]

 http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionExcepti
 on
 [ERROR]
 [ERROR] After correcting the problems, you can resume the build with the
 command
 [ERROR]   mvn goals -rf :hadoop-yarn-server-common


 Is anyone looking into this yet?

 --Bobby





[jira] [Created] (MAPREDUCE-2907) ResourceManager logs filled with [INFO] debug messages from org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue

2011-08-29 Thread Ravi Prakash (JIRA)
ResourceManager logs filled with [INFO] debug messages from 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue


 Key: MAPREDUCE-2907
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2907
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 0.23.0
Reporter: Ravi Prakash
 Fix For: 0.23.0


I see a lot of info messages (probably used for debugging during development)





[jira] [Reopened] (MAPREDUCE-2550) bin/mapred no longer works from a source checkout

2011-08-26 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-2550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash reopened MAPREDUCE-2550:
-


As of commit 103374ed71a22d614e77e79ac816fc72fbf93463 (Revision 161793) on branch-0.23,
mapred-config.sh searches for hadoop-config.sh in bin. However, after the recent 
mavenization and restructuring, if I run 
{noformat} $ mvn -Pdist install {noformat} in the top-level directory, 
hadoop-config.sh is put in libexec.

Can we please handle this case too?

 bin/mapred no longer works from a source checkout
 -

 Key: MAPREDUCE-2550
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2550
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: build
Affects Versions: 0.20.3
 Environment: Java 6, Redhat 5.5
Reporter: Eric Yang
Assignee: Eric Yang
Priority: Blocker
 Fix For: 0.23.0

 Attachments: MAPREDUCE-2550-1.patch, MAPREDUCE-2550-2.patch, 
 MAPREDUCE-2550.patch


 Developer may want to run hadoop without extracting tarball.  It would be 
 nice if existing method to run mapred scripts from source code is preserved 
 for developers.





How to run yarn?

2011-08-25 Thread Ravi Prakash
Hi,

https://issues.apache.org/jira/browse/HADOOP-7563 (or whatever other jira
was responsible) was so awesome that I was able to export
HADOOP_COMMON_HOME, HADOOP_HDFS_HOME and run
target/hadoop-common-0.23.0-SNAPSHOT/sbin/hadoop-daemon.sh start
namenode|datanode|secondarynamenode without any hassles. Great work folks!
Kudos! It is really incredible.

Is there a plan for getting the same awesomeness for YARN? What is the plan
for starting the YARN-daemons from source?

Could someone (Alejandro/Eric/?) please point me to the one comment / JIRA, out
of the millions floating around for mavenization, that explains how to start the
YARN daemons from the built sources?

Thanks
Ravi.


Re: Problem building yarn in trunk, which was working earlier.

2011-08-23 Thread Ravi Prakash
I too saw this in 911cd2546c882c8e6d87b17b068af3af53a933c6 (1160424).
However when I git pulled it to 37028a826dbefad9b82b8ec0954c5e18e6d77e22
(1160521), the problem went away.

On Mon, Aug 22, 2011 at 10:58 AM, Ravi Teja ravit...@huawei.com wrote:

 Hi,



 Everything was working fine till recently, where the dependency resolution
 itself is failing.

 Are there any changes done, I have executed the target which was working
 earlier, which is mvn install assembly:assembly

 Thanks in advance.



 [INFO]
 
 [INFO] Building hadoop-yarn-api 1.0-SNAPSHOT
 [INFO]
 
 [INFO]
 
 [INFO] Reactor Summary:
 [INFO]
 [INFO] hadoop-yarn-api ... FAILURE [0.297s]
 [INFO] hadoop-yarn-common  SKIPPED
 [INFO] hadoop-yarn-server-common . SKIPPED
 [INFO] hadoop-yarn-server-nodemanager  SKIPPED
 [INFO] hadoop-yarn-server-resourcemanager  SKIPPED
 [INFO] hadoop-yarn-server-tests .. SKIPPED
 [INFO] hadoop-yarn-server  SKIPPED
 [INFO] hadoop-yarn ... SKIPPED
 [INFO] hadoop-mapreduce-client-core .. SKIPPED
 [INFO] hadoop-mapreduce-client-common  SKIPPED
 [INFO] hadoop-mapreduce-client-shuffle ... SKIPPED
 [INFO] hadoop-mapreduce-client-app ... SKIPPED
 [INFO] hadoop-mapreduce-client-hs  SKIPPED
 [INFO] hadoop-mapreduce-client-jobclient . SKIPPED
 [INFO] hadoop-mapreduce-client ... SKIPPED
 [INFO] hadoop-mapreduce .. SKIPPED
 [INFO]
 
 [INFO] BUILD FAILURE
 [INFO]
 
 [INFO] Total time: 0.672s
 [INFO] Finished at: Mon Aug 22 21:15:00 IST 2011
 [INFO] Final Memory: 4M/8M
 [INFO]
 
 [ERROR] Failed to execute goal on project hadoop-yarn-api: Could not
 resolve
 dep
 endencies for project org.apache.hadoop:hadoop-yarn-api:jar:1.0-SNAPSHOT:
 Failed
  to collect dependencies for [org.apache.avro:avro:jar:1.4.1 (compile),
 com.goog
 le.protobuf:protobuf-java:jar:2.4.0a (compile),
 org.apache.hadoop:hadoop-common:
 jar:0.23.0-SNAPSHOT (compile),
 org.apache.hadoop:hadoop-annotations:jar:0.23.0-S
 NAPSHOT (compile), junit:junit:jar:4.8.2 (compile),
 org.mockito:mockito-all:jar:
 1.8.5 (test), org.apache.hadoop:hadoop-common:jar:tests:0.23.0-SNAPSHOT
 (test),
 org.apache.hadoop:hadoop-hdfs:jar:0.23.0-SNAPSHOT (runtime),
 com.google.inject.e
 xtensions:guice-servlet:jar:2.0 (compile),
 org.jboss.netty:netty:jar:3.2.3.Final
  (compile), org.slf4j:slf4j-api:jar:1.6.1 (compile),
 org.slf4j:slf4j-log4j12:jar
 :1.6.1 (compile)]: Failed to read artifact descriptor for
 org.apache.hadoop:hado
 op-common:jar:0.23.0-SNAPSHOT: Failure to find
 org.apache.hadoop:hadoop-project-
 distro:pom:0.23.0-SNAPSHOT in  http://repository.apache.org/snapshots
 http://repository.apache.org/snapshots was cached
 in the local repository, resolution will not be reattempted until the
 update
 int
 erval of apache.snapshots has elapsed or updates are forced - [Help 1]



 Regards,

 Ravi Teja






mapreduce examples jar not being built

2011-08-22 Thread Ravi Prakash
Good folks of the Shire,

Does anyone know the magic needed for building the mapreduce examples jar
in the new mavenized Middle-earth?

Cheers
Ravi