Re: Branching 2.5

2014-07-29 Thread Andrew Wang
I looked in the log, it also looks like findbugs is OOMing:

 [java] Exception in thread "main" java.lang.OutOfMemoryError: GC
overhead limit exceeded
 [java] at edu.umd.cs.findbugs.ba.Path.grow(Path.java:263)
 [java] at edu.umd.cs.findbugs.ba.Path.copyFrom(Path.java:113)
 [java] at edu.umd.cs.findbugs.ba.Path.duplicate(Path.java:103)
 [java] at edu.umd.cs.findbugs.ba.obl.State.duplicate(State.java:65)


This is quite possibly related, since there's an error at the end like this:

[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project
hadoop-hdfs: An Ant BuildException has occured: input file
/home/jenkins/jenkins-slave/workspace/HADOOP2_Release_Artifacts_Builder/branch-2.5.0/hadoop-hdfs-project/hadoop-hdfs/target/findbugsXml.xml
does not exist

[ERROR] around Ant part ..
@ 44:368 in
/home/jenkins/jenkins-slave/workspace/HADOOP2_Release_Artifacts_Builder/branch-2.5.0/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml

I'll try to figure out how to increase this, but if anyone else knows, feel
free to chime in.
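
If the release builder drives FindBugs through the findbugs-maven-plugin (I haven't
verified that against the builder's pom), the plugin's maxHeap setting is probably
the knob to try first; it sizes the forked FindBugs JVM in MB and only takes effect
when the plugin forks. A rough sketch of the override:

     <plugin>
       <groupId>org.codehaus.mojo</groupId>
       <artifactId>findbugs-maven-plugin</artifactId>
       <configuration>
         <!-- heap for the forked FindBugs JVM, in MB -->
         <maxHeap>1024</maxHeap>
       </configuration>
     </plugin>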


On Tue, Jul 29, 2014 at 5:41 PM, Karthik Kambatla 
wrote:

> Devs,
>
> I created branch-2.5.0 and was trying to cut an RC, but ran into issues
> with creating one. If anyone knows what is going on, please help me out. I'll
> continue looking into it otherwise.
>
> https://builds.apache.org/job/HADOOP2_Release_Artifacts_Builder/24/console
> is the build that failed. It appears the issue is because it can't find
> Null.java. I run into the same issue locally as well, even with
> branch-2.4.1. So, I wonder if I should be doing anything else to create the
> RC instead?
>
> Thanks
> Karthik
>
>
> On Sun, Jul 27, 2014 at 11:09 AM, Zhijie Shen 
> wrote:
>
> > I've just committed YARN-2247, which is the last 2.5 blocker from YARN.
> >
> >
> > On Sat, Jul 26, 2014 at 5:02 AM, Karthik Kambatla 
> > wrote:
> >
> > > A quick update:
> > >
> > > All remaining blockers are on the verge of getting committed. Once that
> > is
> > > done, I plan to cut a branch for 2.5.0 and get an RC out hopefully this
> > > coming Monday.
> > >
> > >
> > > On Fri, Jul 25, 2014 at 12:32 PM, Andrew Wang <
> andrew.w...@cloudera.com>
> > > wrote:
> > >
> > > > One thing I forgot, the release note activities are happening at
> > > > HADOOP-10821. If you have other things you'd like to see mentioned,
> > feel
> > > > free to leave a comment on the JIRA and I'll try to include it.
> > > >
> > > > Thanks,
> > > > Andrew
> > > >
> > > >
> > > > On Fri, Jul 25, 2014 at 12:28 PM, Andrew Wang <
> > andrew.w...@cloudera.com>
> > > > wrote:
> > > >
> > > > > I just went through and fixed up the HDFS and Common CHANGES.txt
> for
> > > > 2.5.0.
> > > > >
> > > > > As a friendly reminder, please try to put things under the correct
> > > > section
> > > > > :) We have subsections for the xattr changes in HDFS-2006 and
> > > > HADOOP-10514,
> > > > > and there were some unrelated JIRAs appended to the end.
> > > > >
> > > > > I'd also encourage committers to be more liberal with their use of
> > the
> > > > NEW
> > > > > FEATURES section. I'm helping Karthik write up the 2.5 release
> notes,
> > > and
> > > > > I'm using NEW FEATURES to fill it out. When looking through the
> JIRA
> > > list
> > > > > though, I decided to promote things like the SNN/DN/JN webUI
> > > > improvements,
> > > > > the HCFS specification work, and OIV read-only WebHDFS access to
> new
> > > > > features. One rule-of-thumb, if a feature required an umbrella
> JIRA,
> > > put
> > > > > the umbrella under NEW FEATURES when it's resolved.
> > > > >
> > > > > Thanks,
> > > > > Andrew
> > > > >
> > > > >
> > > > > On Wed, Jul 16, 2014 at 7:59 PM, Wangda Tan 
> > > wrote:
> > > > >
> > > > >> Thanks Tsuyoshi for pointing me to this,
> > > > >>
> > > > >> Wangda
> > > > >>
> > > > >>
> > > > >> On Thu, Jul 17, 2014 at 10:30 AM, Tsuyoshi OZAWA <
> > > > >> ozawa.tsuyo...@gmail.com>
> > > > >> wrote:
> > > > >>
> > > > >> > Hi Wangda,
> > > > >> >
> > > > >> > The following link is the same link Karthik mentioned:
> > > > >> >
> > > > >> >
> > > > >> >
> > > > >>
> > > >
> > >
> >
> https://issues.apache.org/jira/browse/YARN-2247?jql=project%20in%20(Hadoop%2C%20HDFS%2C%20YARN%2C%20%22Hadoop%20Map%2FReduce%22)%20AND%20resolution%20%3D%20Unresolved%20AND%20%22Target%20Version%2Fs%22%20%3D%202.5.0%20AND%20priority%20in%20(Blocker)
> > > > >> >
> > > > >> > Or, please access to http://goo.gl/FX3iWp
> > > > >> >
> > > > >> > Thanks,
> > > > >> > - Tsuyoshi
> > > > >> >
> > > > >> > On Thu, Jul 17, 2014 at 10:55 AM, Zhijie Shen <
> > > zs...@hortonworks.com>
> > > > >> > wrote:
> > > > >> > > I raised YARN-2247 as the blocker of 2.5.0.
> > > > >> > >
> > > > >> > >
> > > > >> > > On Thu, Jul 17, 2014 at 9:42 AM, Wangda Tan <
> > wheele...@gmail.com>
> > > > >> wrote:
> > > > >> > >
> > > > >> > >> Hi Karthik,
> > > > >> > >> I found I cannot access the filter: http://s.apache.org/vJg.
> > > Cou

Re: Branching 2.5

2014-07-29 Thread Karthik Kambatla
Devs,

I created branch-2.5.0 and was trying to cut an RC, but ran into issues
with creating one. If anyone knows what is going on, please help me out. I'll
continue looking into it otherwise.

https://builds.apache.org/job/HADOOP2_Release_Artifacts_Builder/24/console
is the build that failed. It appears the issue is because it can't find
Null.java. I run into the same issue locally as well, even with
branch-2.4.1. So, I wonder if I should be doing anything else to create the
RC instead?

Thanks
Karthik


On Sun, Jul 27, 2014 at 11:09 AM, Zhijie Shen  wrote:

> I've just committed YARN-2247, which is the last 2.5 blocker from YARN.
>
>
> On Sat, Jul 26, 2014 at 5:02 AM, Karthik Kambatla 
> wrote:
>
> > A quick update:
> >
> > All remaining blockers are on the verge of getting committed. Once that
> is
> > done, I plan to cut a branch for 2.5.0 and get an RC out hopefully this
> > coming Monday.
> >
> >
> > On Fri, Jul 25, 2014 at 12:32 PM, Andrew Wang 
> > wrote:
> >
> > > One thing I forgot, the release note activities are happening at
> > > HADOOP-10821. If you have other things you'd like to see mentioned,
> feel
> > > free to leave a comment on the JIRA and I'll try to include it.
> > >
> > > Thanks,
> > > Andrew
> > >
> > >
> > > On Fri, Jul 25, 2014 at 12:28 PM, Andrew Wang <
> andrew.w...@cloudera.com>
> > > wrote:
> > >
> > > > I just went through and fixed up the HDFS and Common CHANGES.txt for
> > > 2.5.0.
> > > >
> > > > As a friendly reminder, please try to put things under the correct
> > > section
> > > > :) We have subsections for the xattr changes in HDFS-2006 and
> > > HADOOP-10514,
> > > > and there were some unrelated JIRAs appended to the end.
> > > >
> > > > I'd also encourage committers to be more liberal with their use of
> the
> > > NEW
> > > > FEATURES section. I'm helping Karthik write up the 2.5 release notes,
> > and
> > > > I'm using NEW FEATURES to fill it out. When looking through the JIRA
> > list
> > > > though, I decided to promote things like the SNN/DN/JN webUI
> > > improvements,
> > > > the HCFS specification work, and OIV read-only WebHDFS access to new
> > > > features. One rule-of-thumb, if a feature required an umbrella JIRA,
> > put
> > > > the umbrella under NEW FEATURES when it's resolved.
> > > >
> > > > Thanks,
> > > > Andrew
> > > >
> > > >
> > > > On Wed, Jul 16, 2014 at 7:59 PM, Wangda Tan 
> > wrote:
> > > >
> > > >> Thanks Tsuyoshi for pointing me to this,
> > > >>
> > > >> Wangda
> > > >>
> > > >>
> > > >> On Thu, Jul 17, 2014 at 10:30 AM, Tsuyoshi OZAWA <
> > > >> ozawa.tsuyo...@gmail.com>
> > > >> wrote:
> > > >>
> > > >> > Hi Wangda,
> > > >> >
> > > >> > The following link is the same link Karthik mentioned:
> > > >> >
> > > >> >
> > > >> >
> > > >>
> > >
> >
> https://issues.apache.org/jira/browse/YARN-2247?jql=project%20in%20(Hadoop%2C%20HDFS%2C%20YARN%2C%20%22Hadoop%20Map%2FReduce%22)%20AND%20resolution%20%3D%20Unresolved%20AND%20%22Target%20Version%2Fs%22%20%3D%202.5.0%20AND%20priority%20in%20(Blocker)
> > > >> >
> > > >> > Or, please access to http://goo.gl/FX3iWp
> > > >> >
> > > >> > Thanks,
> > > >> > - Tsuyoshi
> > > >> >
> > > >> > On Thu, Jul 17, 2014 at 10:55 AM, Zhijie Shen <
> > zs...@hortonworks.com>
> > > >> > wrote:
> > > >> > > I raised YARN-2247 as the blocker of 2.5.0.
> > > >> > >
> > > >> > >
> > > >> > > On Thu, Jul 17, 2014 at 9:42 AM, Wangda Tan <
> wheele...@gmail.com>
> > > >> wrote:
> > > >> > >
> > > >> > >> Hi Karthik,
> > > >> > >> I found I cannot access the filter: http://s.apache.org/vJg.
> > Could
> > > >> you
> > > >> > >> please check its permission? I'd like to know if there's any
> > > related
> > > >> > issues
> > > >> > >> to me. :)
> > > >> > >>
> > > >> > >> Thanks,
> > > >> > >> Wangda
> > > >> > >>
> > > >> > >>
> > > >> > >> On Thu, Jul 17, 2014 at 5:54 AM, Karthik Kambatla <
> > > >> ka...@cloudera.com>
> > > >> > >> wrote:
> > > >> > >>
> > > >> > >> > We are down to 4 blockers and looks like they are all
> actively
> > > >> being
> > > >> > >> worked
> > > >> > >> > on. Please reconsider marking new JIRAs as blockers.
> > > >> > >> >
> > > >> > >> > Thanks
> > > >> > >> > Karthik
> > > >> > >> >
> > > >> > >> > PS: I moved out a couple of JIRAs that didn't seem like true
> > > >> blockers
> > > >> > to
> > > >> > >> > 2.6.
> > > >> > >> >
> > > >> > >> >
> > > >> > >> > On Wed, Jul 9, 2014 at 11:43 AM, Karthik Kambatla <
> > > >> ka...@cloudera.com
> > > >> > >
> > > >> > >> > wrote:
> > > >> > >> >
> > > >> > >> > > Folks,
> > > >> > >> > >
> > > >> > >> > > We have 10 blockers for 2.5. Can the people working on them
> > > >> revisit
> > > >> > and
> > > >> > >> > > see if they are really blockers. If they are, can we try to
> > get
> > > >> > them in
> > > >> > >> > > soon? It would be nice to get an RC out the end of this
> week
> > or
> > > >> at
> > > >> > >> least
> > > >> > >> > > early next week?
> > > >> > >> > >
> > > >> > >> > > Thanks
> > > >> > >> > > Karthik
> > > >> > >> > >
> > > >> > >> > 

[jira] [Resolved] (HADOOP-6636) New hadoop version command to show which NameNode and JobTracker hosts are associated with the client node .

2014-07-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6636.
--

Resolution: Fixed

org.apache.hadoop.util.VersionInfo was added at some point. Closing.

> New hadoop version command to show which NameNode and JobTracker hosts are 
> associated with the client node .
> 
>
> Key: HADOOP-6636
> URL: https://issues.apache.org/jira/browse/HADOOP-6636
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Ravi Phulari
>Priority: Minor
> Attachments: HADOOP-6636.patch
>
>
> Currently there is no command to show the NameNode and JobTracker associated with 
> a client node.
> The workaround used for this is to grep $HADOOP_CONF_DIR/hdfs-site.xml and 
> mapred-site.xml.
> This process is very tedious when more than one Hadoop cluster is configured. 
> We can display this information in the *hadoop version* command.
> I will be uploading a patch which shows the NN & JT information in the version 
> command, as shown below.
> {noformat}
> [rphulari@statepick-lm]> bin/hadoop version
> Hadoop 0.20.100.0-SNAPSHOT
> Subversion git://local-lm/ on branch H20s -r 
> af2da4db0328975f929c8ece9aa8d3079fa60c4a
> Compiled by rphulari on Fri Mar 26 18:20:35 PDT 2010
> Name Node Host hdfs://localhost
> Job Tracker Host localhost 
> {noformat} 
> *dfsadmin -report is restricted to admin only and it shows only datanodes and 
> does not include NN and JT information*



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10901) provide un-camelCased versions of shell commands

2014-07-29 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-10901:
-

 Summary: provide un-camelCased versions of shell commands
 Key: HADOOP-10901
 URL: https://issues.apache.org/jira/browse/HADOOP-10901
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Allen Wittenauer



There is a heavy disposition to use camelCase subcommands because it reflects 
what is in the Java code.  However, it runs very counter to shell conventions.  We 
should update the case statements to accept both the camelCase and the fully 
lowercase spellings.
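
A sketch of the shape of the change, using two of the camelCase subcommands from 
the hdfs script as examples (the surrounding script scaffolding here is made up, 
not the actual hadoop/hdfs script):

{noformat}
#!/usr/bin/env bash
# Accept both the camelCase and the all-lowercase spelling of each subcommand.
COMMAND="$1"
case "${COMMAND}" in
  snapshotdiff|snapshotDiff)
    CLASS=org.apache.hadoop.hdfs.tools.snapshot.SnapshotDiff
    ;;
  lssnapshottabledir|lsSnapshottableDir)
    CLASS=org.apache.hadoop.hdfs.tools.snapshot.LsSnapshottableDir
    ;;
  *)
    echo "Unknown command: ${COMMAND}" >&2
    exit 1
    ;;
esac
echo "Would run: ${CLASS}"
{noformat}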



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-6629) versions of dependencies should be specified in a single place

2014-07-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6629.
--

Resolution: Fixed

Closing this due to mavenization.

> versions of dependencies should be specified in a single place
> --
>
> Key: HADOOP-6629
> URL: https://issues.apache.org/jira/browse/HADOOP-6629
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Doug Cutting
>Assignee: Doug Cutting
> Attachments: HADOOP-6629.patch, HADOOP-6629.patch
>
>
> Currently the Maven POM file is generated from a template file that includes 
> the versions of all the libraries we depend on.  The versions of these 
> libraries are also present in ivy/libraries.properties, so that, when a 
> library is updated, it must be updated in two places, which is error-prone.  
> We should instead only specify library versions in a single place.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-6619) Improve error messages when logging in from keytab

2014-07-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6619.
--

Resolution: Duplicate

Yes, it is.

Closing as a dupe.

> Improve error messages when logging in from keytab
> --
>
> Key: HADOOP-6619
> URL: https://issues.apache.org/jira/browse/HADOOP-6619
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Owen O'Malley
>
> The error messages when either the user is not found in the keytab or the 
> keytab isn't readable is really bad.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-6615) make SecurityAudit log to be created ONLY on the server side.

2014-07-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6615.
--

Resolution: Fixed

Fixed.

... and essentially re-done in HADOOP-9902.

> make SecurityAudit log to be created ONLY on the server side.
> -
>
> Key: HADOOP-6615
> URL: https://issues.apache.org/jira/browse/HADOOP-6615
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Boris Shkolnik
>Assignee: Boris Shkolnik
> Attachments: HADOOP-6615-1-BP20.patch, HADOOP-6615-BP20.patch
>
>
> default log4j.properties will have this SecurityLogAudit set to console.
> hadoop-daemon.sh will move it to a DRFA file.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-6592) Scheduler: Pause button desirable

2014-07-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6592.
--

Resolution: Won't Fix

> Scheduler: Pause button desirable
> -
>
> Key: HADOOP-6592
> URL: https://issues.apache.org/jira/browse/HADOOP-6592
> Project: Hadoop Common
>  Issue Type: Wish
>Reporter: Adam Kramer
>Priority: Minor
>
> It would be lovely if, from the jobtracker page, I could click a button 
> that's not "kill" or "fail" but ..."pause."
> The pause button would stop a certain task from starting any more mappers or 
> reducers. They would all wait in the "pending" stage until the job is 
> "un-paused." Currently-running tasks would continue to run, and then 
> complete, thus freeing the resources for other jobs.
> This would help a lot for systems (esp. Hive) in which one or two jobs are 
> hogging a lot of mappers or reducers. The ones they have would finish, and 
> then other jobs could "catch up," and then they could be unpaused for a 
> while. This would also allow for user-level throttling of their jobs in 
> instances where they need a lot of resources but have the time to spare.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-6602) ClassLoader (Configuration#setClassLoader) in new Job API (0.20) does not work

2014-07-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6602.
--

Resolution: Fixed

> ClassLoader (Configuration#setClassLoader) in new Job API (0.20) does not work
> --
>
> Key: HADOOP-6602
> URL: https://issues.apache.org/jira/browse/HADOOP-6602
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 0.20.1
> Environment: Cloudera Hadoop 0.20.1+152
>Reporter: William Kinney
>
> new Job/Configuration API's setClassLoader (Configuration#setClassLoader) 
> gets overwritten w/ {{Thread.currentThread().getContextClassLoader()}} when 
> invoking Job#submit. 
> Upon a call to Job#submit, JobClient#submitJobInternal invokes {{JobContext 
> context = new JobContext(job, jobId);}}, which in the constructor for 
> org.apache.hadoop.mapreduce.JobContext, wraps Job w/ new JobConf and 
> therefore overwrites the classLoader member set on Configuration via an init block 
> w/ {{classLoader =  Thread.currentThread().getContextClassLoader();}}
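
A minimal sketch of the workaround this behavior suggests (assuming the current 
mapreduce Job API; not taken from the report): set the thread context class loader 
before submitting, since that is what the re-wrapped JobConf picks up.

{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class SubmitWithClassLoader {
  public static void main(String[] args) throws Exception {
    // e.g. a URLClassLoader holding the job's extra jars
    ClassLoader jobLoader = SubmitWithClassLoader.class.getClassLoader();
    Configuration conf = new Configuration();
    conf.setClassLoader(jobLoader);                            // what the reporter tried
    Thread.currentThread().setContextClassLoader(jobLoader);   // survives the JobConf re-wrap described above
    Job job = Job.getInstance(conf, "classloader-example");
    job.submit();
  }
}
{noformat}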



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-6597) additional source only release tarball

2014-07-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6597.
--

Resolution: Fixed

> additional source only release tarball
> --
>
> Key: HADOOP-6597
> URL: https://issues.apache.org/jira/browse/HADOOP-6597
> Project: Hadoop Common
>  Issue Type: Wish
>Reporter: Thomas Koch
>Priority: Trivial
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> One common annoyance when packaging java applications for a Free Software 
> distribution is the necessity to repackage the upstream tarball. The 
> repackaging is necessary, because Debian may only distribute binary files 
> build from source that's also available from Debian.
> So we build the jar/war files ourselfes to make sure there's nothing we don't 
> have the sources for.
> It would take one (annoying and time consuming) step less for packagers, if 
> java upstream projects would release an additional tarball without any binary 
> files or third party code.
> I'm asking you first, because many other projects (like zookeeper) took or 
> take hadoop as an example for their build infrastructure.
> For your orientation, these are the patterns that I used to filter the hadoop 
> tarball: (Usable with tar --exclude)
> "*.jar",
> "uming.*",
> "prototype.js",
> "config.sub",
> "config.guess",
> "ltmain.sh",
> "Makefile.in",
> "configure",
> "aclocal.m4",
> "config.h.in",
> "install-sh",
> "autom4te.cache",
> "depcomp",
> "missing",
> "pipes/compile",
> "src/contrib/eclipse-plugin/resources/*.jpg",
> "src/contrib/eclipse-plugin/resources/*.png",
> "src/contrib/eclipse-plugin/resources/*.gif",
> "hadoop-0.20.1/src/core/org/apache/hadoop/record/compiler/generated/*.java",
> "hadoop-0.20.1/src/docs/cn/build",
> "hadoop-0.20.1/c++",
> "hadoop-0.20.1/contrib",
> "hadoop-0.20.1/lib/native",
> "hadoop-0.20.1/librecordio",
> "hadoop-0.20.1/src/contrib/thriftfs/gen-*",
> "hadoop-0.20.1/docs",
> There were different reasons why stuff needed to be filtered:
> - unclear license (uming.*)
> - unclear origin (images in the eclipse plugin)
> - precompiled documentation / code / hadoop binaries
> - pregenerated C/C++ automake files
> - third party libraries (prototype.js, lib/*.jar)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-6574) Commit HADOOP-6414:Add command line help for -expunge command. to Hadoop 0.20

2014-07-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6574.
--

Resolution: Fixed

> Commit HADOOP-6414:Add command line help for -expunge command. to Hadoop 
> 0.20 
> -
>
> Key: HADOOP-6574
> URL: https://issues.apache.org/jira/browse/HADOOP-6574
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 0.20.0
>Reporter: Ravi Phulari
>Assignee: Ravi Phulari
>Priority: Trivial
> Attachments: HADOOP-6574.patch
>
>
> HADOOP-6414 : Add command line help for -expunge command. needs to be 
> committed to Hadoop 0.20.
> Opening this new Jira to address this issue.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-10686) Writables are not always configured

2014-07-29 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla resolved HADOOP-10686.
---

Resolution: Fixed

Looks like I screwed up the merge to branch-2. I merged the changes to branch-2 
along with YARN-2155, and hence they are hidden under that commit. I was hoping to 
add a dummy commit to capture that information, but it looks like that isn't 
possible.  


> Writables are not always configured
> ---
>
> Key: HADOOP-10686
> URL: https://issues.apache.org/jira/browse/HADOOP-10686
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Abraham Elmahrek
>Assignee: Abraham Elmahrek
> Fix For: 2.5.0
>
> Attachments: MAPREDUCE-5914.0.patch, MAPREDUCE-5914.1.patch, 
> MAPREDUCE-5914.2.patch
>
>
> Seeing the following exception:
> {noformat}
> java.lang.Exception: java.lang.NullPointerException
>   at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:403)
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.sqoop.job.io.SqoopWritable.readFields(SqoopWritable.java:59)
>   at 
> org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:129)
>   at 
> org.apache.hadoop.mapred.MapTask$MapOutputBuffer.compare(MapTask.java:1248)
>   at org.apache.hadoop.util.QuickSort.fix(QuickSort.java:35)
>   at org.apache.hadoop.util.QuickSort.sortInternal(QuickSort.java:87)
>   at org.apache.hadoop.util.QuickSort.sort(QuickSort.java:63)
>   at 
> org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1582)
>   at 
> org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1467)
>   at 
> org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:699)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:769)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:339)
>   at 
> org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:235)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:744)
> {noformat}
> It turns out that WritableComparator does not configure Writable objects 
> :https://github.com/apache/hadoop-common/blob/branch-2.3.0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableComparator.java.
>  This is during the sort phase for an MR job.
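
For context, a minimal sketch of what "configured" means here (illustrative class, 
not from the patch): a Writable that implements Configurable only gets its 
Configuration when it is created through ReflectionUtils.newInstance(cls, conf); 
the comparator path above created keys without a Configuration, so getConf() was 
null inside readFields().

{noformat}
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.util.ReflectionUtils;

public class ConfiguredKey extends Configured implements WritableComparable<ConfiguredKey> {
  private int value;

  @Override
  public void readFields(DataInput in) throws IOException {
    if (getConf() == null) {
      // this is the state the sort/compare path ran into
      throw new IllegalStateException("readFields called on an unconfigured Writable");
    }
    value = in.readInt();
  }

  @Override
  public void write(DataOutput out) throws IOException {
    out.writeInt(value);
  }

  @Override
  public int compareTo(ConfiguredKey other) {
    return Integer.compare(value, other.value);
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    ConfiguredKey configured = ReflectionUtils.newInstance(ConfiguredKey.class, conf);
    ConfiguredKey unconfigured = ReflectionUtils.newInstance(ConfiguredKey.class, null);
    System.out.println("configured:   " + (configured.getConf() != null));   // true
    System.out.println("unconfigured: " + (unconfigured.getConf() != null)); // false
  }
}
{noformat}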



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-6528) Jetty returns -1 resulting in Hadoop masters / slaves to fail during startup.

2014-07-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6528.
--

Resolution: Not a Problem

Hello from the future!

At this point, I'm assuming this issue has been fixed in one way or another.  I 
know that as of this writing, parts of Hadoop are using jetty 6.1.26 and other 
parts are using netty 3.6.2.  Additionally, JRE 6 has been EOLed by Oracle.

I think I'll close this out as Not A Problem, because as far as I'm aware, this 
is no longer an active issue for Hadoop.

> Jetty returns -1 resulting in Hadoop masters / slaves to fail during startup.
> -
>
> Key: HADOOP-6528
> URL: https://issues.apache.org/jira/browse/HADOOP-6528
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Hemanth Yamijala
> Attachments: jetty-server-failure.log
>
>
> A recent test failure on Hudson seems to indicate that Jetty's 
> Server.getConnectors()[0].getLocalPort() is returning -1 in the 
> HttpServer.getPort() method. When this happens, Hadoop masters / slaves that 
> use Jetty fail to startup correctly.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-6525) Support secure clients connecting to insecure servers

2014-07-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6525.
--

Resolution: Duplicate

I believe this is the basics of HDFS-3905 and its related JIRAs.  So I'll close 
this as a dupe. If anyone feels otherwise, feel free to open a new JIRA.

> Support secure clients connecting to insecure servers
> -
>
> Key: HADOOP-6525
> URL: https://issues.apache.org/jira/browse/HADOOP-6525
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Owen O'Malley
>
> It would be useful to allow clients that have security turned on to talk to 
> servers that have security turned off. (This does *not* mean protocol 
> compatibility between versions, but rather with the same version with 
> different configurations.)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Reopened] (HADOOP-10686) Writables are not always configured

2014-07-29 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla reopened HADOOP-10686:
---


Looks like I messed up backporting to branch-2. 

> Writables are not always configured
> ---
>
> Key: HADOOP-10686
> URL: https://issues.apache.org/jira/browse/HADOOP-10686
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Abraham Elmahrek
>Assignee: Abraham Elmahrek
> Fix For: 2.5.0
>
> Attachments: MAPREDUCE-5914.0.patch, MAPREDUCE-5914.1.patch, 
> MAPREDUCE-5914.2.patch
>
>
> Seeing the following exception:
> {noformat}
> java.lang.Exception: java.lang.NullPointerException
>   at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:403)
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.sqoop.job.io.SqoopWritable.readFields(SqoopWritable.java:59)
>   at 
> org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:129)
>   at 
> org.apache.hadoop.mapred.MapTask$MapOutputBuffer.compare(MapTask.java:1248)
>   at org.apache.hadoop.util.QuickSort.fix(QuickSort.java:35)
>   at org.apache.hadoop.util.QuickSort.sortInternal(QuickSort.java:87)
>   at org.apache.hadoop.util.QuickSort.sort(QuickSort.java:63)
>   at 
> org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1582)
>   at 
> org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1467)
>   at 
> org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:699)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:769)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:339)
>   at 
> org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:235)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:744)
> {noformat}
> It turns out that WritableComparator does not configure Writable objects 
> :https://github.com/apache/hadoop-common/blob/branch-2.3.0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableComparator.java.
>  This is during the sort phase for an MR job.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-6514) Failed to run pseudo-distributed cluster on Win XP

2014-07-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6514.
--

Resolution: Won't Fix

> Failed to run pseudo-distributed cluster on Win XP
> --
>
> Key: HADOOP-6514
> URL: https://issues.apache.org/jira/browse/HADOOP-6514
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.20.1
> Environment: Windows XP
>Reporter: Yura Taras
> Attachments: core-site.xml, grep.log, hdfs-site.xml, mapred-site.xml
>
>
> Failed to run pseudo-distributed cluster on Win XP, while standalone mode 
> works fine. 
> Steps to reproduce:
> 1. Install cygwin+ssh on WinXp, download and unpack hadoop
> 2. Set up java_home in hadoop-env.sh
> 3. Adjust config according to sample attached files
> 4. Run following command:
> bin/hadoop namenode -format  && bin/start-all.sh && bin/hadoop fs -put conf 
> input && bin/hadoop jar hadoop-0.20.1-examples.jar grep input output 
> 'dfs[a-z.]'
> Job fails, sample log is attached



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-6499) Standardize when we return false and when we throw IOException in FileSystem API

2014-07-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6499.
--

Resolution: Won't Fix

> Standardize when we return false and when we throw IOException in FileSystem 
> API
> 
>
> Key: HADOOP-6499
> URL: https://issues.apache.org/jira/browse/HADOOP-6499
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Zheng Shao
>
> Currently most of the methods in Hadoop FileSystem has 2 ways of returning 
> errors:
> 1. Return false
> 2. throw an IOException
> We should standardize what should happen in what case, so that the caller can 
> retry/fail accordingly.
> The standard can be added to the javadoc of FileSystem, and then we need to verify 
> that all FileSystem implementations follow the same standard.
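
A small sketch of the caller-side ambiguity in question (the path is illustrative): 
the same logical failure can surface either as a false return or as an IOException, 
so callers currently have to guard against both.

{noformat}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DeleteAmbiguity {
  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.get(new Configuration());
    Path p = new Path("/tmp/delete-ambiguity-example");
    try {
      if (!fs.delete(p, true)) {
        // Failure style 1: a bare "false" -- missing file? permission problem? something else?
        System.err.println("delete returned false for " + p);
      }
    } catch (IOException e) {
      // Failure style 2: an exception, possibly for the very same condition
      System.err.println("delete threw: " + e);
    }
  }
}
{noformat}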



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-6485) Trash fails on Windows

2014-07-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6485.
--

Resolution: Fixed

Likely stale.

> Trash fails on Windows
> --
>
> Key: HADOOP-6485
> URL: https://issues.apache.org/jira/browse/HADOOP-6485
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.20.0
>Reporter: Konstantin Shvachko
>
> Using local file system on windows Trash tries to move file 
> "file:/C:/tmp/testTrash/foo" to "file:/C:/Documents and 
> Settings/shv/.Trash/Current/C:/tmp/testTrash/foo" with "C:" in the middle.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-6483) Provide Hadoop as a Service based on standards

2014-07-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6483.
--

Resolution: Incomplete

Interesting, but stale.

> Provide Hadoop as a Service based on standards
> --
>
> Key: HADOOP-6483
> URL: https://issues.apache.org/jira/browse/HADOOP-6483
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Yang Zhou
> Attachments: OGF27-HPCBPforHadoop.ppt, SC08-HPCBPforHadoop.ppt
>
>
> Hadoop as a Service provides a standards-based web services interface that 
> layers on top of Hadoop on Demand and allows Hadoop jobs to be submitted via 
> popular schedulers, such as Sun Grid Engine (SGE), Platform LSF, Microsoft 
> HPC Server 2008 etc., to local or remote Hadoop clusters.  This allows 
> multiple Hadoop clusters within an organization to be efficiently shared and 
> provides flexibility, allowing remote Hadoop clusters, offered as Cloud 
> services, to be used for experimentation and burst capacity. HaaS hides 
> complexity, allowing users to submit many types of compute or data intensive 
> work via a single scheduler without actually knowing where it will be done. 
> Additionally providing a standards-based front-end to Hadoop means that users 
> would be able to easily choose HaaS providers without being locked in, i.e. 
> via proprietary interfaces such as Amazon's map/reduce service.  
> Our HaaS implementation uses the OGF High Performance Computing Basic Profile 
> standard to define interoperable job submission descriptions and management 
> interfaces to Hadoop. It uses Hadoop on Demand to provision capacity. Our 
> HaaS implementation also supports files stage in/out with protocols like FTP, 
> SCP and GridFTP.
> Our HaaS implementation also provides a suit of RESTful interface which  
> compliant with HPC-BP.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-6476) o.a.h.io.Text - setCapacity does not shrink size

2014-07-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6476.
--

Resolution: Won't Fix

> o.a.h.io.Text - setCapacity does not shrink size 
> -
>
> Key: HADOOP-6476
> URL: https://issues.apache.org/jira/browse/HADOOP-6476
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Karthik K
> Attachments: HADOOP-6476.patch, HADOOP-6476.patch, HADOOP-6476.patch
>
>
> The internal byte array of o.a.h.io.Text does not shrink , if we set a 
> capacity that is less than the size of the internal byte array. 
> * If input length for setCapacity < length of byte array - then the byte 
> array is reset. 
> * Existing data is retained depending on if keepData variable is set. 
> * 4 new test cases added for various capacity sizes 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-6470) JMX Context for Metrics

2014-07-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6470.
--

Resolution: Fixed

> JMX Context for Metrics
> ---
>
> Key: HADOOP-6470
> URL: https://issues.apache.org/jira/browse/HADOOP-6470
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Dmytro Molkov
>Assignee: Dmytro Molkov
> Attachments: JMX.patch
>
>
> The way metrics are currently exposed to the JMX in the NameNode is not 
> helpful, since only the current counters in the record can be fetched and 
> without any context those number mean little.
> For example the number of files created equal to 150 only means that in the 
> last period there were 150 files created but when the new period will end is 
> unknown so fetching 150 again will either mean another 150 files or we are 
> fetching the same time period.
> One of the solutions for this problem will be to have a JMX context that will 
> accumulate the data (being child class of AbstractMetricsContext) and expose 
> different records to the JMX through custom MBeans. This way the information 
> fetched from the JMX will represent the state of things in a more meaningful 
> way.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-6468) compile-core-test failing with java.lang.NoSuchMethodError: org.objectweb.asm.ClassWriter.

2014-07-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6468.
--

Resolution: Fixed

> compile-core-test failing with java.lang.NoSuchMethodError: 
> org.objectweb.asm.ClassWriter.
> 
>
> Key: HADOOP-6468
> URL: https://issues.apache.org/jira/browse/HADOOP-6468
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.22.0
> Environment: Ant 1.7
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Trivial
>
> I'm filing this for something for the search engines to index, so when others 
> hit the problem, the solution is here. 
> hadoop-common's tests arent' compiling on one machine,  
> java.lang.NoSuchMethodError: org.objectweb.asm.ClassWriter.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10900) CredentialShell args should use single-dash style

2014-07-29 Thread Andrew Wang (JIRA)
Andrew Wang created HADOOP-10900:


 Summary: CredentialShell args should use single-dash style
 Key: HADOOP-10900
 URL: https://issues.apache.org/jira/browse/HADOOP-10900
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor


As was discussed in HADOOP-10793 related to KeyShell, we should standardize on 
single-dash flags for things in branch-2. CredentialShell also needs to be 
updated.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-6458) Website nightly developer links to non-existant Hudson page Hadoop-trunk

2014-07-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6458.
--

Resolution: Fixed

Almost certainly fixed by now.

> Website nightly developer links to non-existant Hudson page Hadoop-trunk
> 
>
> Key: HADOOP-6458
> URL: https://issues.apache.org/jira/browse/HADOOP-6458
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Chris Wilkes
>Priority: Minor
>   Original Estimate: 0.25h
>  Remaining Estimate: 0.25h
>
> The http://hadoop.apache.org/common/ page links to 
> http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ for the nightly build 
> and that does not exist.  Probably should link to 
> http://hudson.zones.apache.org/hudson/job/Hadoop-Common-trunk/



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Hadoop Code of Incompatible Changes

2014-07-29 Thread Sandy Ryza
Thanks Arpit!


On Tue, Jul 29, 2014 at 2:24 PM, Arpit Agarwal 
wrote:

> I cleared out the wiki page and left a forwarding link to the site docs.
> From a quick scan all the content is included in the site docs.
>
>
> On Tue, Jul 29, 2014 at 2:14 PM, Sandy Ryza 
> wrote:
>
> > Eli pointed out to me that this is the up-to-date compatibility guide:
> >
> >
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Compatibility.html
> >
> > Thanks,
> > Sandy
> >
> >
> > On Tue, Jul 29, 2014 at 9:39 AM, Sandy Ryza 
> > wrote:
> >
> > > Hi Zhijie,
> > >
> > > The Hadoop compatibility guide mentions this as "semantic
> compatibility":
> > > http://wiki.apache.org/hadoop/Compatibility
> > >
> > > My interpretation of the section is that we can't change the behavior
> of
> > > public APIs unless we're fixing buggy behavior.  If the change could
> > break
> > > an existing application that's behaving reasonably with respect to the
> > old
> > > API, it's an incompatible change.
> > >
> > > -Sandy
> > >
> > >
> > >
> > > On Tue, Jul 29, 2014 at 9:26 AM, Zhijie Shen 
> > > wrote:
> > >
> > >> Hi folks,
> > >>
> > >> Recently we have a conversation on YARN-2209 about the incompatible
> > >> changes
> > >> over releases. For those API changes that will break binary
> > compatibility,
> > >> source compatibility towards the existing API users, we've already
> had a
> > >> rather clear picture about what we should do. However, YARN-2209 has
> > >> introduced another case which I'm not quite sure about, which is kind
> of
> > >> *logic
> > >> incompatibility*.
> > >>
> > >> In detail, an ApplicationMasterProtocol API is going to throw an
> > exception
> > >> which is not expected before. The exception is a sub-class of
> > >> YarnException, such that it doesn't need any method signature change,
> > and
> > >> won't break any binary/source compatibility. However, the exception is
> > not
> > >> expected before, but needs to be treated specially at the AM side. Not
> > >> being aware of the newly introduced exception, the existing YARN
> > >> applications' AM may not handle the exception properly, and is at the
> > risk
> > >> of being broken on a new YARN cluster after this change.
> > >>
> > >> An additional thought around this problem is that the change of what
> > >> exception is to throw under what situation may be considered as a
> *soft
> > >> *method
> > >> signature change, because we're supposed to write this javadoc to tell
> > the
> > >> users (though we didn't do it well in Hadoop), and users refer to it to
> > guide
> > >> how to handle the exception.
> > >>
> > >> In a more generalized form, let's assume we have a method, which
> behaves
> > >> as
> > >> A, in release 1.0. However, in release 2.0, the method signature has
> > kept
> > >> the same, while its behavior is altered from A to B. A and B are
> > different
> > >> behaviors. In this case, do we consider it as an incompatible change?
> > >>
> > >> I think it's somewhat a common issue, such that I raise it on the
> > mailing
> > >> list. Please share your ideas.
> > >>
> > >> Thanks,
> > >> Zhijie
> > >>
> > >> --
> > >> Zhijie Shen
> > >> Hortonworks Inc.
> > >> http://hortonworks.com/
> > >>
> > >> --
> > >> CONFIDENTIALITY NOTICE
> > >> NOTICE: This message is intended for the use of the individual or
> entity
> > >> to
> > >> which it is addressed and may contain information that is
> confidential,
> > >> privileged and exempt from disclosure under applicable law. If the
> > reader
> > >> of this message is not the intended recipient, you are hereby notified
> > >> that
> > >> any printing, copying, dissemination, distribution, disclosure or
> > >> forwarding of this communication is strictly prohibited. If you have
> > >> received this communication in error, please contact the sender
> > >> immediately
> > >> and delete it from your system. Thank You.
> > >>
> > >
> > >
> >
>
> --
> CONFIDENTIALITY NOTICE
> NOTICE: This message is intended for the use of the individual or entity to
> which it is addressed and may contain information that is confidential,
> privileged and exempt from disclosure under applicable law. If the reader
> of this message is not the intended recipient, you are hereby notified that
> any printing, copying, dissemination, distribution, disclosure or
> forwarding of this communication is strictly prohibited. If you have
> received this communication in error, please contact the sender immediately
> and delete it from your system. Thank You.
>


[jira] [Resolved] (HADOOP-6457) Set Hadoop User/Group by System properties or environment variables

2014-07-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6457.
--

Resolution: Duplicate

> Set Hadoop User/Group by System properties or environment variables
> ---
>
> Key: HADOOP-6457
> URL: https://issues.apache.org/jira/browse/HADOOP-6457
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: issei yoshida
> Attachments: 6457.patch
>
>
> Hadoop User/Group can be set by System properties or environment variables.
> For example, in environment variables,
> export HADOOP_USER=test
> export HADOOP_GROUP=user
> or in your MapReduce,
> System.setProperty("hadoop.user.name", "test");
> System.setProperty("hadoop.group.name", "user");



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Hadoop Code of Incompatible Changes

2014-07-29 Thread Arpit Agarwal
I cleared out the wiki page and left a forwarding link to the site docs.
From a quick scan all the content is included in the site docs.


On Tue, Jul 29, 2014 at 2:14 PM, Sandy Ryza  wrote:

> Eli pointed out to me that this is the up-to-date compatibility guide:
>
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Compatibility.html
>
> Thanks,
> Sandy
>
>
> On Tue, Jul 29, 2014 at 9:39 AM, Sandy Ryza 
> wrote:
>
> > Hi Zhijie,
> >
> > The Hadoop compatibility guide mentions this as "semantic compatibility":
> > http://wiki.apache.org/hadoop/Compatibility
> >
> > My interpretation of the section is that we can't change the behavior of
> > public APIs unless we're fixing buggy behavior.  If the change could
> break
> > an existing application that's behaving reasonably with respect to the
> old
> > API, it's an incompatible change.
> >
> > -Sandy
> >
> >
> >
> > On Tue, Jul 29, 2014 at 9:26 AM, Zhijie Shen 
> > wrote:
> >
> >> Hi folks,
> >>
> >> Recently we have a conversation on YARN-2209 about the incompatible
> >> changes
> >> over releases. For those API changes that will break binary
> compatibility,
> >> source compatibility towards the existing API users, we've already had a
> >> rather clear picture about what we should do. However, YARN-2209 has
> >> introduced another case which I'm not quite sure about, which is kind of
> >> *logic
> >> incompatibility*.
> >>
> >> In detail, an ApplicationMasterProtocol API is going to throw an
> exception
> >> which is not expected before. The exception is a sub-class of
> >> YarnException, such that it doesn't need any method signature change,
> and
> >> won't break any binary/source compatibility. However, the exception is
> not
> >> expected before, but needs to be treated specially at the AM side. Not
> >> being aware of the newly introduced exception, the existing YARN
> >> applications' AM may not handle the exception properly, and is at the
> risk
> >> of being broken on a new YARN cluster after this change.
> >>
> >> An additional thought around this problem is that the change of what
> >> exception is to throw under what situation may be considered as a *soft
> >> *method
> >> signature change, because we're supposed to write this javadoc to tell
> the
> >> users (though we didn't do it well in Hadoop), and users refer to it to
> guide
> >> how to handle the exception.
> >>
> >> In a more generalized form, let's assume we have a method, which behaves
> >> as
> >> A, in release 1.0. However, in release 2.0, the method signature has
> kept
> >> the same, while its behavior is altered from A to B. A and B are
> different
> >> behaviors. In this case, do we consider it as an incompatible change?
> >>
> >> I think it's somewhat a common issue, such that I raise it on the
> mailing
> >> list. Please share your ideas.
> >>
> >> Thanks,
> >> Zhijie
> >>
> >> --
> >> Zhijie Shen
> >> Hortonworks Inc.
> >> http://hortonworks.com/
> >>
> >> --
> >> CONFIDENTIALITY NOTICE
> >> NOTICE: This message is intended for the use of the individual or entity
> >> to
> >> which it is addressed and may contain information that is confidential,
> >> privileged and exempt from disclosure under applicable law. If the
> reader
> >> of this message is not the intended recipient, you are hereby notified
> >> that
> >> any printing, copying, dissemination, distribution, disclosure or
> >> forwarding of this communication is strictly prohibited. If you have
> >> received this communication in error, please contact the sender
> >> immediately
> >> and delete it from your system. Thank You.
> >>
> >
> >
>

-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.


Re: Hadoop Code of Incompatible Changes

2014-07-29 Thread Sandy Ryza
Eli pointed out to me that this is the up-to-date compatibility guide:
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Compatibility.html

Thanks,
Sandy


On Tue, Jul 29, 2014 at 9:39 AM, Sandy Ryza  wrote:

> Hi Zhijie,
>
> The Hadoop compatibility guide mentions this as "semantic compatibility":
> http://wiki.apache.org/hadoop/Compatibility
>
> My interpretation of the section is that we can't change the behavior of
> public APIs unless we're fixing buggy behavior.  If the change could break
> an existing application that's behaving reasonably with respect to the old
> API, it's an incompatible change.
>
> -Sandy
>
>
>
> On Tue, Jul 29, 2014 at 9:26 AM, Zhijie Shen 
> wrote:
>
>> Hi folks,
>>
>> Recently we have a conversation on YARN-2209 about the incompatible
>> changes
>> over releases. For those API changes that will break binary compatibility,
>> source compatibility towards the existing API users, we've already had a
>> rather clear picture about what we should do. However, YARN-2209 has
>> introduced another case which I'm not quite sure about, which is kind of
>> *logic
>> incompatibility*.
>>
>> In detail, an ApplicationMasterProtocol API is going to throw an exception
>> which is not expected before. The exception is a sub-class of
>> YarnException, such that it doesn't need any method signature change, and
>> won't break any binary/source compatibility. However, the exception is not
>> expected before, but needs to be treated specially at the AM side. Not
>> being aware of the newly introduced exception, the existing YARN
>> applications' AM may not handle the exception properly, and is at the risk
>> of being broken on a new YARN cluster after this change.
>>
>> An additional thought around this problem is that the change of what
>> exception is to throw under what situation may be considered as a *soft
>> *method
>> signature change, because we're supposed to write this javadoc to tell the
>> users (though we didn't do it well in Hadoop), and users refer to it to guide
>> how to handle the exception.
>>
>> In a more generalized form, let's assume we have a method, which behaves
>> as
>> A, in release 1.0. However, in release 2.0, the method signature has kept
>> the same, while its behavior is altered from A to B. A and B are different
>> behaviors. In this case, do we consider it as an incompatible change?
>>
>> I think it's somewhat a common issue, such that I raise it on the mailing
>> list. Please share your ideas.
>>
>> Thanks,
>> Zhijie
>>
>> --
>> Zhijie Shen
>> Hortonworks Inc.
>> http://hortonworks.com/
>>
>> --
>> CONFIDENTIALITY NOTICE
>> NOTICE: This message is intended for the use of the individual or entity
>> to
>> which it is addressed and may contain information that is confidential,
>> privileged and exempt from disclosure under applicable law. If the reader
>> of this message is not the intended recipient, you are hereby notified
>> that
>> any printing, copying, dissemination, distribution, disclosure or
>> forwarding of this communication is strictly prohibited. If you have
>> received this communication in error, please contact the sender
>> immediately
>> and delete it from your system. Thank You.
>>
>
>


[jira] [Resolved] (HADOOP-6406) hadoop-core.pom contains hardcoded and nonexistent commons-cli version and jetty groupId/artifactId mixup

2014-07-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6406.
--

Resolution: Not a Problem

> hadoop-core.pom contains hardcoded and nonexistent commons-cli version and 
> jetty groupId/artifactId mixup
> -
>
> Key: HADOOP-6406
> URL: https://issues.apache.org/jira/browse/HADOOP-6406
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Marcel  May
>Priority: Trivial
> Attachments: HADOOP-6406.patch
>
>
> hadoop-core.pom in trunk contains
> a) hardcoded non-existing commons-cli version "2.0-20070823"
> b) jetty groupId/artifactId mixup : the given artifactId is the groupId, the 
> groupId is the artifactId



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-6370) Contrib project ivy dependencies are not included in binary target

2014-07-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6370.
--

Resolution: Won't Fix

> Contrib project ivy dependencies are not included in binary target
> --
>
> Key: HADOOP-6370
> URL: https://issues.apache.org/jira/browse/HADOOP-6370
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Aaron Kimball
>Assignee: Aaron Kimball
>Priority: Critical
> Attachments: HADOOP-6370.2.patch, HADOOP-6370.patch
>
>
> Only Hadoop's own library dependencies are promoted to ${build.dir}/lib; any 
> libraries required by contribs are not redistributed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-6446) Deprecate FileSystem

2014-07-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6446.
--

Resolution: Won't Fix

Closing this as Won't Fix.

FILESYSTEM HEALTHY

> Deprecate FileSystem
> 
>
> Key: HADOOP-6446
> URL: https://issues.apache.org/jira/browse/HADOOP-6446
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs
>Reporter: Eli Collins
>
> Here's a jira to track deprecating FileSystem. There are two high-level tasks:
> 1. Moving clients in common and mapreduce onto FileContext. I think this is 
> currently blocked on HADOOP-6356 and HADOOP-6361. Anything else preventing 
> them from being moved over? Any clients we could move over today?
> 2. Moving file system implementations onto AbstractFileSystem. Currently Hdfs 
> and FilterFS (and local and checksum) extend AFS. RawLocalFileSystem uses the 
> delegator. S3, ftp and others need to be moved over. Don't think there's 
> anything blocking this stuff.
> 3. Stabilize FileContext
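
A minimal sketch of item 1 above, i.e. what client code looks like once moved from 
FileSystem onto FileContext (paths are illustrative):

{noformat}
import java.util.EnumSet;

import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class FileContextExample {
  public static void main(String[] args) throws Exception {
    FileContext fc = FileContext.getFileContext();        // default file system from the configuration
    Path dir = new Path("/tmp/fc-example");
    fc.mkdir(dir, FsPermission.getDirDefault(), true);     // createParent = true
    FSDataOutputStream out =
        fc.create(new Path(dir, "hello.txt"),
                  EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE));
    try {
      out.writeUTF("hello");
    } finally {
      out.close();
    }
  }
}
{noformat}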



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-6368) hadoop classpath is too long

2014-07-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6368.
--

Resolution: Duplicate

Closing this as part of the HADOOP-9902 work.

> hadoop classpath is too long
> 
>
> Key: HADOOP-6368
> URL: https://issues.apache.org/jira/browse/HADOOP-6368
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.21.0
>Reporter: Tsz Wo Nicholas Sze
>
> After combined common, hdfs and mapreduce, the hadoop classpath in my machine 
> has more than 1 characters.  There are also redundant jars in the 
> classpath.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-6365) distributed cache doesn't work with HDFS and another file system

2014-07-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6365.
--

Resolution: Fixed

This looks fixed.

> distributed cache doesn't work with HDFS and another file system
> 
>
> Key: HADOOP-6365
> URL: https://issues.apache.org/jira/browse/HADOOP-6365
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: filecache
>Affects Versions: 0.20.1
> Environment: CentOS
>Reporter: Marc Colosimo
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> This is a continuation of http://issues.apache.org/jira/browse/HADOOP-5635 
> (JIRA wouldn't let me edit that one). I found another issue with 
> DistributedCache using something besides HDFS. In my case I have TWO active 
> file systems, with HDFS being the default file system.
> My fix includes two additional changes (from HADOOP-5635) to get it to work 
> with another filesystem scheme (plus the changes from the original patch). 
> I've tested this and it works with my code on HDFS with another filesystem. I 
> have similar changes to mapreduce.filecache.TaskDistributedCacheManager and 
> TrackerDistributedCacheManager (0.22.0).
> Basically, URI.getPath() is called instead of URI.toString(). toString 
> returns the scheme plus path which is important in finding the file to copy 
> (getting the file system). Otherwise it searches the default file system (in 
> this case HDFS) for the file.
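
To make the scheme point concrete, here is a minimal, hypothetical sketch (the s3 URI below is made up, and this is not the DistributedCache code itself) showing how keeping or dropping the scheme changes which file system a cache path resolves against:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CacheFileSchemeDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();                // fs default assumed to be HDFS in the report
    URI cacheFile = new URI("s3://bucket/cache/dict.txt");   // hypothetical non-default file system

    System.out.println(cacheFile.toString());  // s3://bucket/cache/dict.txt -> identifies the owning FS
    System.out.println(cacheFile.getPath());   // /cache/dict.txt            -> the scheme is lost

    // A Path built from the full URI keeps the scheme, so the correct file system can be found;
    // a Path built from getPath() alone is resolved against the default file system instead,
    // where the file does not exist.
    Path withScheme = new Path(cacheFile);
    Path pathOnly   = new Path(cacheFile.getPath());
    FileSystem defaultFs = pathOnly.getFileSystem(conf);     // falls back to the default FS
    System.out.println(withScheme + " vs resolved on " + defaultFs.getUri());
  }
}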



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-6363) Move Seekable and PositionedReadable interfaces from o.a.h.fs to o.a.h.io package

2014-07-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6363.
--

Resolution: Won't Fix

> Move Seekable and PositionedReadable interfaces from o.a.h.fs to o.a.h.io 
> package
> -
>
> Key: HADOOP-6363
> URL: https://issues.apache.org/jira/browse/HADOOP-6363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, io
>Reporter: Arun C Murthy
>
> Currently Seekable and PositionedReadable live in o.a.h.fs package. They 
> really do not belong there, I propose we move them to the o.a.h.io package.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-6349) Implement FastLZCodec for fastlz/lzo algorithm

2014-07-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6349.
--

Resolution: Won't Fix

> Implement FastLZCodec for fastlz/lzo algorithm
> --
>
> Key: HADOOP-6349
> URL: https://issues.apache.org/jira/browse/HADOOP-6349
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: io
>Reporter: William Kinney
> Attachments: HADOOP-6349-TestFastLZCodec.patch, HADOOP-6349.patch, 
> TestCodecPerformance.java, TestCodecPerformance.java, hadoop-6349-1.patch, 
> hadoop-6349-2.patch, hadoop-6349-3.patch, hadoop-6349-4.patch, 
> testCodecPerfResults.tsv
>
>
> Per  [HADOOP-4874|http://issues.apache.org/jira/browse/HADOOP-4874], FastLZ 
> is a good (speed, license) alternative to LZO. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-6340) Change build/test/clean targets to prevent tar target from including clover instrumented code

2014-07-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6340.
--

Resolution: Won't Fix

> Change build/test/clean targets to prevent tar target from including clover 
> instrumented code
> -
>
> Key: HADOOP-6340
> URL: https://issues.apache.org/jira/browse/HADOOP-6340
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.20.1
>Reporter: Lee Tucker
>Assignee: Giridharan Kesavan
>
> Currently the clover targets in build.xml cause the code generated in the 
> build root to contain clover instrumented code.   Should the tar target be 
> called, this instrumented code is packaged up and made part of what would be 
> delivered to the grid for use.   Installing cloverized code on real clusters 
> is generally a very bad idea unless it's done with significant upfront 
> thought.   
> I propose that we alter the targets so that when clover is enabled, the 
> compile target does two passes.   The first would generate the uninstrumented 
> code in the standard build root.  The second pass would then generate clover 
> instrumented code in a build-clover build root.  That way, the tar target 
> would only pick up uninstrumented code.   I strongly suggest a 2 pass 
> compile, and not a different target, because you never want the two sets of 
> objects to be out of sync.  (For instance, you might want to run the clover 
> instrumented unit tests, and then package the uninstrumented code to be 
> delivered to the next step in your QA/Release process.)
> The test targets would also need to be altered.  I'd propose that the test 
> results still be placed in their current location in the build root, 
> regardless of whether the tests were run with instrumented or uninstrumented 
> code.  This means that when clover is enabled, the test target would 
> execute against the objects in build-clover, but report results in build.  
> (This would allow currently existing test infrastructure to continue to 
> report results without modification.)
> The clean target(s) would also need to be enhanced to clean out both build 
> roots.
> The only drawback to this approach I can see is that if you wanted to produce 
> instrumented code to be delivered to a real grid, you'd have to create the 
> package from build-clover instead of build manually, or we'd have to add a 
> "tar-with-clover" target that did it for you.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-6330) Integrating IBM General Parallel File System implementation of Hadoop Filesystem interface

2014-07-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6330.
--

Resolution: Won't Fix

Supplemental FileSystem/FileContext code comes from their respective projects 
instead of Hadoop now.

> Integrating IBM General Parallel File System implementation of Hadoop 
> Filesystem interface
> --
>
> Key: HADOOP-6330
> URL: https://issues.apache.org/jira/browse/HADOOP-6330
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
> Environment: Where GPFS is supported
>Reporter: Reshu Jain
>
> GPFS is a high performance parallel file system for GNU/Linux clusters. This 
> patch contains the implementation of the Hadoop Filesystem Interface. There 
> is a dependency on the availability of GPFS on the host where the JNI 
> implementation can be built.
> The patch consists of fs/gpfs classes and the JNI module in c++/libgpfs. 
> http://www-03.ibm.com/systems/clusters/software/gpfs/index.html



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-6319) Capacity reporting incorrect on Solaris

2014-07-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6319.
--

Resolution: Won't Fix

> Capacity reporting incorrect on Solaris
> ---
>
> Key: HADOOP-6319
> URL: https://issues.apache.org/jira/browse/HADOOP-6319
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.20.1
>Reporter: Doug Judd
> Attachments: solaris-hadoop.patch
>
>
> When trying to get Hadoop up and running on Solaris on a ZFS filesystem, I 
> encountered a problem where the capacity reported was zero:
> Configured Capacity: 0 (0 KB)
> It looks like the problem is with the 'df' output:
> $ df -k /data/hadoop 
> Filesystem   1024-blocksUsed   Available Capacity  Mounted on
> /  0 71863542049027426%/
> The following patch (applied to trunk) fixes the problem.  Though the real 
> problem is with 'df', I suspect the patch is harmless enough to include?
> Index: src/java/org/apache/hadoop/fs/DF.java
> ===
> --- src/java/org/apache/hadoop/fs/DF.java (revision 826471)
> +++ src/java/org/apache/hadoop/fs/DF.java (working copy)
> @@ -181,7 +181,11 @@
>  this.percentUsed = Integer.parseInt(tokens.nextToken());
>  this.mount = tokens.nextToken();
>  break;
> -   }
> +}
> +
> +if (this.capacity == 0)
> + this.capacity = this.used + this.available;
> +
>}
>  
>public static void main(String[] args) throws Exception {
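
For context, here is a self-contained sketch of the proposed fallback. This is not the actual org.apache.hadoop.fs.DF code, and the df line below is made up; it only shows the idea that when the parsed capacity is zero, the capacity can be derived from used + available.

import java.util.StringTokenizer;

public class DfCapacityFallback {
  public static void main(String[] args) {
    // Hypothetical 'df -k' data line: filesystem, 1024-blocks, used, available, capacity%, mount
    String line = "/ 0 123456 7654321 2% /";
    StringTokenizer tokens = new StringTokenizer(line);
    tokens.nextToken();                                          // filesystem name
    long capacity  = Long.parseLong(tokens.nextToken()) * 1024;  // reported as 0 on Solaris/ZFS
    long used      = Long.parseLong(tokens.nextToken()) * 1024;
    long available = Long.parseLong(tokens.nextToken()) * 1024;
    if (capacity == 0) {
      capacity = used + available;   // the fallback the patch adds
    }
    System.out.println("configured capacity = " + capacity + " bytes");
  }
}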



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-6316) Add missing Junit tests for Hadoop-6314

2014-07-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6316.
--

Resolution: Won't Fix

> Add missing Junit tests for Hadoop-6314
> ---
>
> Key: HADOOP-6316
> URL: https://issues.apache.org/jira/browse/HADOOP-6316
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Ravi Phulari
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-6383) Add simulated data node cluster start/stop commands in hadoop-dameon.sh .

2014-07-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6383.
--

Resolution: Duplicate

> Add simulated data node cluster start/stop commands in hadoop-dameon.sh .
> -
>
> Key: HADOOP-6383
> URL: https://issues.apache.org/jira/browse/HADOOP-6383
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ravi Phulari
>
> Currently there are no commands supported for starting or stopping simulated 
> data node clusters.
> To start a simulated data node cluster we need to export the extra class paths 
> required for DataNodeCluster.
>  
> {noformat}
> bin/hadoop-daemon.sh start org.apache.hadoop.hdfs.DataNodeCluster  -simulated 
> -n $DATANODE_PER_HOST -inject $STARTING_BLOCK_ID $BLOCKS_PER_DN  
> {noformat}
> {noformat}
> bin/hadoop-daemon.sh stop org.apache.hadoop.hdfs.DataNodeCluster  -simulated  
> {noformat}
> For a better user interface we should add a DataNodeCluster start/stop option in 
> hadoop-daemon.sh.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-6302) Move FileSystem and all of the implementations to HDFS project

2014-07-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6302.
--

Resolution: Won't Fix

> Move FileSystem and all of the implementations to HDFS project
> --
>
> Key: HADOOP-6302
> URL: https://issues.apache.org/jira/browse/HADOOP-6302
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Owen O'Malley
>
> Currently, the FileSystem and FileContext classes are in Common and the 
> primary implementation is in HDFS. That means that many patches span between 
> the subprojects. I think it will reduce the pain if we move FileSystem and 
> the dependent classes into HDFS.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-6306) ant testcase target should run quickly

2014-07-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6306.
--

Resolution: Won't Fix

> ant testcase target should run quickly 
> ---
>
> Key: HADOOP-6306
> URL: https://issues.apache.org/jira/browse/HADOOP-6306
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Eli Collins
>
> ant -Dtestcase used to execute in a couple of seconds, now running {{ant 
> -Doffline=true -Dtestcase=TestConfiguration test-core}} takes almost 20 
> seconds. Most of the overhead seems to be due to ivy, but it also tries to 
> compile.  Changing {{test-core}} to {{test}} doubles the execution time. It 
> would be great to have an ant target that just executes the given test, i.e. is 
> as fast as running the unit test from eclipse. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10899) Hadoop CommandsManual.vm documentation gives deprecated information

2014-07-29 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-10899:
-

 Summary: Hadoop CommandsManual.vm documentation gives deprecated 
information
 Key: HADOOP-10899
 URL: https://issues.apache.org/jira/browse/HADOOP-10899
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Allen Wittenauer


This is a rollup of several old JIRAs.

The CommandsManual lists very old information about running HDFS and MapReduce 
subcommands from the 'hadoop' shell CLI.  These are deprecated and should be 
removed.  If necessary, the commands should be added to the relevant 
subproject's documentation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Code guidelines and bash

2014-07-29 Thread Colin McCabe
On Tue, Jul 29, 2014 at 2:45 AM, 俊平堵  wrote:
> Sun's Java code convention (published back in 1997) suggests 80 columns per
> line for old-style terminals. That sounds pretty old; however, I've seen some
> developers (not me :)) who like to open multiple terminals on one screen for
> coding/debugging, so 80 columns can be just the right fit. Google's Java convention (
> https://google-styleguide.googlecode.com/svn/trunk/javaguide.html#s4.4-column-limit)
> shows some flexibility here, allowing 80 or 100 columns (with some exception
> cases).
> Like Chris mentioned earlier, I think the 80-column rule should just be a general
> guideline and not a strict limit - we can break it if it hurts the legibility
> of the code.
> BTW, some research found that CPL (characters per line) had only a small
> effect on readability for news reading, including reading speed and
> comprehension (
> http://psychology.wichita.edu/surl/usabilitynews/72/LineLength.asp). Not
> sure if reading code is the same (assuming lines are broken properly).

There is a lot of contradictory research in this area.  For example,
http://eric.ed.gov/?id=EJ749012 talks about 70 characters per line as
"ideal."

I think a lot of these studies don't really translate very well to
code.  (A lot of them are college students seeing how quickly they can
read a news article.)  Code with extremely long line lengths tends to
have super-deep nesting, which makes it hard to keep track of what is
going on (the so-called "arrow anti-pattern").  This is especially
true when there are break and continue statements involved.
Super-long lines make diffs very difficult to do.  And it's just
unpleasant to read, forcing readers to choose between horizontal
scrolling and tiny text...

Maybe it makes sense to extend the bash line length, though, if it's
tough to fit in 80 chars.  Bash is whitespace sensitive and doing the
line continuation thing is a pain.  Another option might be renaming
some variables, or using temp variables with shorter names...

best,
Colin


>
>
> 2014-07-29 15:24 GMT+08:00 Andrew Purtell :
>
>> On Mon, Jul 28, 2014 at 12:05 PM, Doug Cutting  wrote:
>>
>> > On Sun, Jul 27, 2014 at 9:28 PM, Ted Dunning 
>> > wrote:
>> > > I don't know of any dev environments in common use today that can't
>> > display >100 characters.
>> >
>> > I edit in an 80-column Emacs window that just fits beside an 80-column
>> > shell window on a portrait-rotated 24" monitor.
>> >
>>
>> You win the Internet today, Old School category! (smile)
>>
>>
>> --
>> Best regards,
>>
>>- Andy
>>
>> Problems worthy of attack prove their worth by hitting back. - Piet Hein
>> (via Tom White)
>>


Re: Jenkins problem or patch problem?

2014-07-29 Thread Andrew Wang
We could change test-patch to use "git apply" instead of the patch command.
I know a lot of us use git apply when committing, so it seems like a safe
change.


On Tue, Jul 29, 2014 at 1:44 AM, Niels Basjes  wrote:

> I think this behavior is better.
> This way you know your patch was not (fully) applied.
>
> It would be even better if there was a way to submit a patch with a binary
> file in there.
>
> Niels
>
>
> On Mon, Jul 28, 2014 at 11:29 PM, Andrew Wang 
> wrote:
>
> > I had the same issue on HDFS-6696, patch generated with "git diff
> > --binary". I ended up making the same patch without the binary part and
> it
> > could be applied okay.
> >
> > This does differ in behavior from the old boxes, which were still able to
> > apply the non-binary parts of a binary-diff.
> >
> >
> > On Mon, Jul 28, 2014 at 3:06 AM, Niels Basjes  wrote:
> >
> > > For my test case I needed a something.txt.gz file
> > > However for this specific test this file will never be actually read, it
> > > just has to be there and it must be a few bytes in size.
> > > Because binary files don't work I simply created a file containing "Hello
> > > world"
> > > Now this isn't a gzip file at all, yet for my test it does enough to make
> > > the test work as intended.
> > >
> > > So in fact I didn't solve the binary attachment problem at all.
> > >
> > >
> > > On Mon, Jul 28, 2014 at 1:40 AM, Ted Yu  wrote:
> > >
> > > > Mind telling us how you included the binary file in your svn patch ?
> > > >
> > > > Thanks
> > > >
> > > >
> > > > On Sun, Jul 27, 2014 at 12:27 PM, Niels Basjes 
> > wrote:
> > > >
> > > > > I created a patch file with SVN and it works now.
> > > > > I dare to ask: Are there any git created patch files that work?
> > > > >
> > > > >
> > > > > On Sun, Jul 27, 2014 at 9:44 PM, Niels Basjes 
> > wrote:
> > > > >
> > > > > > I'll look for a workaround regarding the binary file. Thanks.
> > > > > >
> > > > > >
> > > > > > On Sun, Jul 27, 2014 at 9:07 PM, Ted Yu 
> > wrote:
> > > > > >
> > > > > >> Similar problem has been observed for HBase patches.
> > > > > >>
> > > > > >> Have you tried attaching level 1 patch ?
> > > > > >> For the binary file, to my knowledge, 'git apply' is able to
> > handle
> > > it
> > > > > but
> > > > > >> hadoop is currently using svn.
> > > > > >>
> > > > > >> Cheers
> > > > > >>
> > > > > >>
> > > > > >> On Sun, Jul 27, 2014 at 11:01 AM, Niels Basjes  >
> > > > wrote:
> > > > > >>
> > > > > >> > Hi,
> > > > > >> >
> > > > > >> > I just submitted a patch and Jenkins said it failed to apply
> the
> > > > > patch.
> > > > > >> > But when I look at the console output
> > > > > >> >
> > > > > >> >
> > > >
> https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4771//console
> > > > > >> >
> > > > > >> > it says:
> > > > > >> >
> > > > > >> > At revision 1613826.
> > > > > >> > MAPREDUCE-2094 patch is being downloaded at Sun Jul 27
> 18:50:44
> > > UTC
> > > > > >> > 2014 fromhttp://
> > > > > >> >
> > > > > >>
> > > > >
> > > >
> > >
> >
> issues.apache.org/jira/secure/attachment/12658034/MAPREDUCE-2094-20140727.patch*cp
> > > > > >> > :
> > > > > >> > cannot stat '/home/jenkins/buildSupport/lib/*': No such file
> or
> > > > > >> > directory
> > > > > >> > *The patch does not appear to apply with p0 to p2
> > > > > >> > PATCH APPLICATION FAILED
> > > > > >> >
> > > > > >> >
> > > > > >> > Now I do have a binary file (for the unit test) in this patch,
> > > > > perhaps I
> > > > > >> > did something wrong? Or is this problem caused by the error I
> > > > > >> highlighted?
> > > > > >> >
> > > > > >> > What can I do to fix this?
> > > > > >> >
> > > > > >> > --
> > > > > >> > Best regards / Met vriendelijke groeten,
> > > > > >> >
> > > > > >> > Niels Basjes
> > > > > >> >
> > > > > >>
> > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > Best regards / Met vriendelijke groeten,
> > > > > >
> > > > > > Niels Basjes
> > > > > >
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Best regards / Met vriendelijke groeten,
> > > > >
> > > > > Niels Basjes
> > > > >
> > > >
> > >
> > >
> > >
> > > --
> > > Best regards / Met vriendelijke groeten,
> > >
> > > Niels Basjes
> > >
> >
>
>
>
> --
> Best regards / Met vriendelijke groeten,
>
> Niels Basjes
>


Re: Hadoop Code of Incompatible Changes

2014-07-29 Thread Sandy Ryza
Hi Zhijie,

The Hadoop compatibility guide mentions this as "semantic compatibility":
http://wiki.apache.org/hadoop/Compatibility

My interpretation of the section is that we can't change the behavior of
public APIs unless we're fixing buggy behavior.  If the change could break
an existing application that's behaving reasonably with respect to the old
API, it's an incompatible change.

-Sandy



On Tue, Jul 29, 2014 at 9:26 AM, Zhijie Shen  wrote:

> Hi folks,
>
> Recently we have a conversation on YARN-2209 about the incompatible changes
> over releases. For those API changes that will break binary compatibility,
> source compatibility towards the existing API users, we've already had a
> rather clear picture about what we should do. However, YARN-2209 has
> introduced another case which I'm not quite sure about, which is kind of
> *logic
> incompatibility*.
>
> In detail, an ApplicationMasterProtocol API is going to throw an exception
> that was not expected before. The exception is a sub-class of
> YarnException, so it doesn't require any method signature change and
> won't break binary/source compatibility. However, the exception was not
> expected before and needs to be treated specially at the AM side. Not
> being aware of the newly introduced exception, an existing YARN
> application's AM may not handle the exception properly, and is at risk
> of being broken on a new YARN cluster after this change.
>
> An additional thought around this problem is that the change of which
> exception is thrown under which situation may be considered a *soft* method
> signature change, because we're supposed to write this in the javadoc to tell
> the users (though we didn't do it well in Hadoop), and users refer to it to
> guide how to handle the exception.
>
> In a more generalized form, let's assume we have a method, which behaves as
> A, in release 1.0. However, in release 2.0, the method signature has stayed
> the same, while its behavior is altered from A to B. A and B are different
> behaviors. In this case, do we consider it as an incompatible change?
>
> I think it's somewhat a common issue, such that I raise it on the mailing
> list. Please share your ideas.
>
> Thanks,
> Zhijie
>
> --
> Zhijie Shen
> Hortonworks Inc.
> http://hortonworks.com/
>


Hadoop Code of Incompatible Changes

2014-07-29 Thread Zhijie Shen
Hi folks,

Recently we have a conversation on YARN-2209 about the incompatible changes
over releases. For those API changes that will break binary compatibility,
source compatibility towards the existing API users, we've already had a
rather clear picture about what we should do. However, YARN-2209 has
introduced another case which I'm not quite sure about, which is kind of *logic
incompatibility*.

In detail, an ApplicationMasterProtocol API is going to throw an exception
that was not expected before. The exception is a sub-class of
YarnException, so it doesn't require any method signature change and
won't break binary/source compatibility. However, the exception was not
expected before and needs to be treated specially at the AM side. Not
being aware of the newly introduced exception, an existing YARN
application's AM may not handle the exception properly, and is at risk
of being broken on a new YARN cluster after this change.
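
To illustrate the concern, here is only a sketch of a typical AM heartbeat call, not code from YARN-2209; the handling shown (and the idea that the new exception would require a recovery action) is a hypothetical stand-in for the real scenario.

import java.io.IOException;
import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.exceptions.YarnException;

public class AmHeartbeatSketch {
  static AllocateResponse heartbeat(AMRMClient<AMRMClient.ContainerRequest> rm, float progress)
      throws IOException {
    try {
      return rm.allocate(progress);
    } catch (YarnException e) {
      // An AM written against the old behavior typically treats any YarnException here
      // as fatal. If a newer cluster starts throwing a subclass that actually means
      // "take some recovery action and retry", this AM fails even though no method
      // signature changed - which is the "logic incompatibility" described above.
      throw new IOException("giving up on unexpected exception", e);
    }
  }
}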

An additional thought around this problem is that the change of which
exception is thrown under which situation may be considered a *soft* method
signature change, because we're supposed to write this in the javadoc to tell
the users (though we didn't do it well in Hadoop), and users refer to it to
guide how to handle the exception.

In a more generalized form, let's assume we have a method, which behaves as
A, in release 1.0. However, in release 2.0, the method signature has stayed
the same, while its behavior is altered from A to B. A and B are different
behaviors. In this case, do we consider it as an incompatible change?

I think it's somewhat a common issue, such that I raise it on the mailing
list. Please share your ideas.

Thanks,
Zhijie

-- 
Zhijie Shen
Hortonworks Inc.
http://hortonworks.com/



Re: Code guidelines and bash

2014-07-29 Thread 俊平堵
Sun's Java code convention (published back in 1997) suggests 80 columns per
line for old-style terminals. That sounds pretty old; however, I've seen some
developers (not me :)) who like to open multiple terminals on one screen for
coding/debugging, so 80 columns can be just the right fit. Google's Java convention (
https://google-styleguide.googlecode.com/svn/trunk/javaguide.html#s4.4-column-limit)
shows some flexibility here, allowing 80 or 100 columns (with some exception
cases).
Like Chris mentioned earlier, I think the 80-column rule should just be a general
guideline and not a strict limit - we can break it if it hurts the legibility
of the code.
BTW, some research found that CPL (characters per line) had only a small
effect on readability for news reading, including reading speed and
comprehension (
http://psychology.wichita.edu/surl/usabilitynews/72/LineLength.asp). Not
sure if reading code is the same (assuming lines are broken properly).


2014-07-29 15:24 GMT+08:00 Andrew Purtell :

> On Mon, Jul 28, 2014 at 12:05 PM, Doug Cutting  wrote:
>
> > On Sun, Jul 27, 2014 at 9:28 PM, Ted Dunning 
> > wrote:
> > > I don't know of any dev environments in common use today that can't
> > display >100 characters.
> >
> > I edit in an 80-column Emacs window that just fits beside an 80-column
> > shell window on a portrait-rotated 24" monitor.
> >
>
> ​You win the Internet today, Old School category! (smile)​
>
>
> --
> Best regards,
>
>- Andy
>
> Problems worthy of attack prove their worth by hitting back. - Piet Hein
> (via Tom White)
>


Build failed in Jenkins: Hadoop-Common-trunk #1190

2014-07-29 Thread Apache Jenkins Server
See 

Changes:

[arp] HADOOP-8069. Enable TCP_NODELAY by default for IPC. (Contributed by Todd 
Lipcon)

[wang] HADOOP-10876. The constructor of Path should not take an empty URL as a 
parameter. Contributed by Zhihai Xu.

[szetszwo] HDFS-6739. Add getDatanodeStorageReport to ClientProtocol.

[brandonli] HDFS-6717. JIRA HDFS-5804 breaks default nfs-gateway behavior for 
unsecured config. Contributed by Brandon Li

--
[...truncated 75251 lines...]
Setting project property: test.exclude.pattern -> _
Setting project property: zookeeper.version -> 3.4.6
Setting project property: hadoop.assemblies.version -> 3.0.0-SNAPSHOT
Setting project property: test.exclude -> _
Setting project property: distMgmtSnapshotsId -> apache.snapshots.https
Setting project property: project.build.sourceEncoding -> UTF-8
Setting project property: java.security.egd -> file:///dev/urandom
Setting project property: distMgmtSnapshotsUrl -> 
https://repository.apache.org/content/repositories/snapshots
Setting project property: distMgmtStagingUrl -> 
https://repository.apache.org/service/local/staging/deploy/maven2
Setting project property: avro.version -> 1.7.4
Setting project property: test.build.data -> 

Setting project property: commons-daemon.version -> 1.0.13
Setting project property: hadoop.common.build.dir -> 

Setting project property: testsThreadCount -> 4
Setting project property: maven.test.redirectTestOutputToFile -> true
Setting project property: jdiff.version -> 1.0.9
Setting project property: project.reporting.outputEncoding -> UTF-8
Setting project property: distMgmtStagingName -> Apache Release Distribution 
Repository
Setting project property: build.platform -> Linux-amd64-64
Setting project property: protobuf.version -> 2.5.0
Setting project property: failIfNoTests -> false
Setting project property: protoc.path -> ${env.HADOOP_PROTOC_PATH}
Setting project property: jersey.version -> 1.9
Setting project property: distMgmtStagingId -> apache.staging.https
Setting project property: distMgmtSnapshotsName -> Apache Development Snapshot 
Repository
Setting project property: ant.file -> 

[DEBUG] Setting properties with prefix: 
Setting project property: project.groupId -> org.apache.hadoop
Setting project property: project.artifactId -> hadoop-common-project
Setting project property: project.name -> Apache Hadoop Common Project
Setting project property: project.description -> Apache Hadoop Common Project
Setting project property: project.version -> 3.0.0-SNAPSHOT
Setting project property: project.packaging -> pom
Setting project property: project.build.directory -> 

Setting project property: project.build.outputDirectory -> 

Setting project property: project.build.testOutputDirectory -> 

Setting project property: project.build.sourceDirectory -> 

Setting project property: project.build.testSourceDirectory -> 

Setting project property: localRepository ->id: local
  url: file:///home/jenkins/.m2/repository/
   layout: default
snapshots: [enabled => true, update => always]
 releases: [enabled => true, update => always]
Setting project property: settings.localRepository -> 
/home/jenkins/.m2/repository
Setting project property: maven.project.dependencies.versions -> 
[INFO] Executing tasks
Build sequence for target(s) `main' is [main]
Complete build sequence is [main, ]

main:
[mkdir] Created dir: 

[mkdir] Skipping 

 because it already exists.
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-common-project ---
[DEBUG] Configuring mojo 
org.apache.maven.plugins:maven-source-plugin:2.1.2:jar-no-fork from plugin 
realm ClassRealm[plugin>org.apache.maven.plugins:maven-source-plugin:2.1.2, 
parent: sun.misc.Launcher$AppClassLoader@53004901]
[DEBUG] Configuring mojo 
'org.apache.maven.plugins:maven-source-plugin:2.1.2:jar-no-fork' with basic 
configurator -->
[DEBUG]  

Build failed in Jenkins: Hadoop-Common-0.23-Build #1025

2014-07-29 Thread Apache Jenkins Server
See 

--
[...truncated 8263 lines...]
Running org.apache.hadoop.fs.TestFileSystemTokens
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.512 sec
Running org.apache.hadoop.fs.TestFSMainOperationsLocalFileSystem
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.712 sec
Running org.apache.hadoop.fs.TestLocalFSFileContextSymlink
Tests run: 61, Failures: 0, Errors: 0, Skipped: 3, Time elapsed: 2.392 sec
Running org.apache.hadoop.fs.TestHarFileSystem
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.313 sec
Running org.apache.hadoop.fs.TestFcLocalFsUtil
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.572 sec
Running org.apache.hadoop.fs.TestLocalDirAllocator
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.986 sec
Running org.apache.hadoop.fs.TestLocalFileSystemPermission
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.517 sec
Running org.apache.hadoop.fs.TestFileSystemCaching
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.885 sec
Running org.apache.hadoop.fs.TestLocalFSFileContextCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.791 sec
Running org.apache.hadoop.fs.TestPath
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.804 sec
Running org.apache.hadoop.fs.TestListFiles
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.543 sec
Running org.apache.hadoop.fs.TestHarFileSystemBasics
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.273 sec
Running org.apache.hadoop.fs.TestChecksumFileSystem
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.825 sec
Running org.apache.hadoop.fs.TestGetFileBlockLocations
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.748 sec
Running org.apache.hadoop.fs.TestFsShellCopy
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.213 sec
Running org.apache.hadoop.fs.TestDU
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.161 sec
Running org.apache.hadoop.fs.TestAvroFSInput
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.481 sec
Running org.apache.hadoop.fs.shell.TestPathExceptions
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.106 sec
Running org.apache.hadoop.fs.shell.TestCommandFactory
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.178 sec
Running org.apache.hadoop.fs.shell.TestPathData
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.757 sec
Running org.apache.hadoop.fs.shell.TestCopy
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.726 sec
Running org.apache.hadoop.fs.TestHardLink
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.305 sec
Running org.apache.hadoop.fs.TestFilterFileSystem
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.608 sec
Running org.apache.hadoop.fs.TestLocalFSFileContextMainOperations
Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.118 sec
Running org.apache.hadoop.fs.TestTrash
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.037 sec
Running org.apache.hadoop.fs.viewfs.TestChRootedFileSystem
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.192 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemDelegation
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.462 sec
Running org.apache.hadoop.fs.viewfs.TestFcMainOperationsLocalFs
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.548 sec
Running org.apache.hadoop.fs.viewfs.TestFcCreateMkdirLocalFs
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.061 sec
Running 
org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem
Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.059 sec
Running org.apache.hadoop.fs.viewfs.TestChRootedFs
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.022 sec
Running org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.183 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsTrash
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.943 sec
Running org.apache.hadoop.fs.viewfs.TestViewfsFileStatus
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.758 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsLocalFs
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.651 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAuthorityLocalFs
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.772 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.403 sec
Running org.apache.hadoop.fs.viewfs.TestFcPermissionsLocalFs
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0,

[jira] [Created] (HADOOP-10898) MapWritable instances cannot be reused when containing different custom Writable classes.

2014-07-29 Thread Eli Acherkan (JIRA)
Eli Acherkan created HADOOP-10898:
-

 Summary: MapWritable instances cannot be reused when containing 
different custom Writable classes.
 Key: HADOOP-10898
 URL: https://issues.apache.org/jira/browse/HADOOP-10898
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Reporter: Eli Acherkan
Priority: Minor


When a data stream contains several MapWritable instances, which contain 
instances of several different custom classes (implementing Writable), 
attempting to reuse a single MapWritable instance for reading the data stream 
results in an IllegalArgumentException. This happens because 
AbstractMapWritable.readFields doesn't reset the classToIdMap/idToClassMap data 
structures.
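
A minimal sketch of the reuse pattern being described follows. The two custom Writable classes are stand-ins for the reporter's own classes, and the exact failure point is as reported in the issue, not independently verified here.

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.DataInputBuffer;
import org.apache.hadoop.io.DataOutputBuffer;
import org.apache.hadoop.io.MapWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;

public class MapWritableReuseDemo {
  // Two trivial custom Writables standing in for the different user classes.
  public static class CustomA implements Writable {
    public void write(DataOutput out) throws IOException {}
    public void readFields(DataInput in) throws IOException {}
  }
  public static class CustomB implements Writable {
    public void write(DataOutput out) throws IOException {}
    public void readFields(DataInput in) throws IOException {}
  }

  public static void main(String[] args) throws IOException {
    MapWritable first = new MapWritable();
    first.put(new Text("a"), new CustomA());
    MapWritable second = new MapWritable();
    second.put(new Text("b"), new CustomB());

    // Serialize both maps back-to-back, as they would appear in a data stream.
    DataOutputBuffer out = new DataOutputBuffer();
    first.write(out);
    second.write(out);

    DataInputBuffer in = new DataInputBuffer();
    in.reset(out.getData(), out.getLength());

    // Reuse one instance for both reads.
    MapWritable reused = new MapWritable();
    reused.readFields(in);   // registers CustomA in the reused instance's class/id maps
    reused.readFields(in);   // reported to fail: the maps are not reset, so CustomB's id
                             // clashes with the entry left over from CustomA
  }
}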



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Jenkins problem or patch problem?

2014-07-29 Thread Niels Basjes
I think this behavior is better.
This way you know your patch was not (fully) applied.

It would be even better if there was a way to submit a patch with a binary
file in there.

Niels


On Mon, Jul 28, 2014 at 11:29 PM, Andrew Wang 
wrote:

> I had the same issue on HDFS-6696, patch generated with "git diff
> --binary". I ended up making the same patch without the binary part and it
> could be applied okay.
>
> This does differ in behavior from the old boxes, which were still able to
> apply the non-binary parts of a binary-diff.
>
>
> On Mon, Jul 28, 2014 at 3:06 AM, Niels Basjes  wrote:
>
> > For my test case I needed a something.txt.gz file
> > However for this specific test this file will never be actually read, it
> > just has to be there and it must be a few bytes in size.
> > Because binary files don't work I simply created a file containing "Hello
> > world"
> > Now this isn't a gzip file at all, yet for my test it does enough to make
> > the test work as intended.
> >
> > So in fact I didn't solve the binary attachment problem at all.
> >
> >
> > On Mon, Jul 28, 2014 at 1:40 AM, Ted Yu  wrote:
> >
> > > Mind telling us how you included the binary file in your svn patch ?
> > >
> > > Thanks
> > >
> > >
> > > On Sun, Jul 27, 2014 at 12:27 PM, Niels Basjes 
> wrote:
> > >
> > > > I created a patch file with SVN and it works now.
> > > > I dare to ask: Are there any git created patch files that work?
> > > >
> > > >
> > > > On Sun, Jul 27, 2014 at 9:44 PM, Niels Basjes 
> wrote:
> > > >
> > > > > I'll look for a workaround regarding the binary file. Thanks.
> > > > >
> > > > >
> > > > > On Sun, Jul 27, 2014 at 9:07 PM, Ted Yu 
> wrote:
> > > > >
> > > > >> Similar problem has been observed for HBase patches.
> > > > >>
> > > > >> Have you tried attaching level 1 patch ?
> > > > >> For the binary file, to my knowledge, 'git apply' is able to
> handle
> > it
> > > > but
> > > > >> hadoop is currently using svn.
> > > > >>
> > > > >> Cheers
> > > > >>
> > > > >>
> > > > >> On Sun, Jul 27, 2014 at 11:01 AM, Niels Basjes 
> > > wrote:
> > > > >>
> > > > >> > Hi,
> > > > >> >
> > > > >> > I just submitted a patch and Jenkins said it failed to apply the
> > > > patch.
> > > > >> > But when I look at the console output
> > > > >> >
> > > > >> >
> > > https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4771//console
> > > > >> >
> > > > >> > it says:
> > > > >> >
> > > > >> > At revision 1613826.
> > > > >> > MAPREDUCE-2094 patch is being downloaded at Sun Jul 27 18:50:44
> > UTC
> > > > >> > 2014 fromhttp://
> > > > >> >
> > > > >>
> > > >
> > >
> >
> issues.apache.org/jira/secure/attachment/12658034/MAPREDUCE-2094-20140727.patch*cp
> > > > >> > :
> > > > >> > cannot stat '/home/jenkins/buildSupport/lib/*': No such file or
> > > > >> > directory
> > > > >> > *The patch does not appear to apply with p0 to p2
> > > > >> > PATCH APPLICATION FAILED
> > > > >> >
> > > > >> >
> > > > >> > Now I do have a binary file (for the unit test) in this patch,
> > > > perhaps I
> > > > >> > did something wrong? Or is this problem caused by the error I
> > > > >> highlighted?
> > > > >> >
> > > > >> > What can I do to fix this?
> > > > >> >
> > > > >> > --
> > > > >> > Best regards / Met vriendelijke groeten,
> > > > >> >
> > > > >> > Niels Basjes
> > > > >> >
> > > > >>
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Best regards / Met vriendelijke groeten,
> > > > >
> > > > > Niels Basjes
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Best regards / Met vriendelijke groeten,
> > > >
> > > > Niels Basjes
> > > >
> > >
> >
> >
> >
> > --
> > Best regards / Met vriendelijke groeten,
> >
> > Niels Basjes
> >
>



-- 
Best regards / Met vriendelijke groeten,

Niels Basjes


Re: Code guidelines and bash

2014-07-29 Thread Andrew Purtell
On Mon, Jul 28, 2014 at 12:05 PM, Doug Cutting  wrote:

> On Sun, Jul 27, 2014 at 9:28 PM, Ted Dunning 
> wrote:
> > I don't know of any dev environments in common use today that can't
> display >100 characters.
>
> I edit in an 80-column Emacs window that just fits beside an 80-column
> shell window on a portrait-rotated 24" monitor.
>

​You win the Internet today, Old School category! (smile)​


-- 
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)