Re: ${!var} In Scripts

2019-03-04 Thread Allen Wittenauer



> On Mar 4, 2019, at 10:00 AM, Daniel Templeton  wrote:
> 
> Do you want to file a JIRA for it, or shall I?

Given I haven’t done any Hadoop work in months and months …



-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: ${!var} In Scripts

2019-03-04 Thread Allen Wittenauer



> On Mar 4, 2019, at 9:33 AM, Daniel Templeton  wrote:
> 
> Thanks!  That's not even close to what the docs suggest it does--no idea 
> what's up with that.

It does. Here’s the paragraph:

"If the first character of parameter is an exclamation point (!), a level of 
variable indirection is introduced. Bash uses the value of the variable formed 
from the rest of parameter as the name of the variable; this variable is then 
expanded and that value is used in the rest of the substitution, rather than 
the value of parameter itself. This is known as indirect expansion. The 
exceptions to this are the expansions of ${!prefix*} and ${!name[@]} described 
below. The exclamation point must immediately follow the left brace in order to 
introduce indirection."

There's a whole section on bash indirect references in the Advanced 
Bash-Scripting Guide (ABS) as well. (Although I think most of the examples there 
still use the \$$foo syntax, with a note that it was replaced by the ${!foo} 
syntax. lol.)

For those playing at home, the hadoop shell code uses them almost 
entirely for utility functions, in order to reduce the amount of code that would 
be needed to process the ridiculous number of duplicated env vars (e.g., 
HADOOP_HOME vs. HDFS_HOME vs. YARN_HOME vs. …).
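
For anyone unfamiliar with the pattern, here is a minimal sketch of how indirection collapses that duplication into a single function. The names below (resolve_opt, the example paths) are made up for illustration; this is not the actual hadoop-functions.sh code:

```shell
#!/usr/bin/env bash
# Resolve a subproject-specific env var with a generic fallback, using
# ${!name} indirection so one function serves every subproject.
resolve_opt() {
  local subproject="$1"              # e.g., HDFS, YARN
  local suffix="$2"                  # e.g., _HOME
  local specific="${subproject}${suffix}"
  if [[ -n "${!specific}" ]]; then
    echo "${!specific}"              # subproject-specific value wins
  else
    echo "${HADOOP_HOME}"            # otherwise fall back to the generic one
  fi
}

HADOOP_HOME=/opt/hadoop
YARN_HOME=/opt/yarn
resolve_opt YARN _HOME    # prints /opt/yarn
resolve_opt HDFS _HOME    # prints /opt/hadoop (HDFS_HOME is unset)
```

Without indirection, each of HADOOP_HOME, HDFS_HOME, YARN_HOME, etc. would need its own near-identical block of fallback logic.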

> This issue only shows up if the user uses the hadoop command to run an 
> arbitrary class not in the default package, e.g. "hadoop 
> org.apache.hadoop.conf.Configuration".  We've been quietly allowing that 
> misuse forever.  Unfortunately, treating CLI output as an API means we can't 
> change that behavior in a minor.  We could, however, deprecate it and add a 
> warning when it's used.  I think that would cover us sufficiently if someone 
> trips on the Ubuntu 18 regression.
> 
> Thoughts?

Oh, I think I see the bug.  HADOOP_SUBCMD (and equivalents in yarn, 
hdfs, etc) just needs some special handling when a custom method is being 
called.  For example, there’s no point in checking to see if it should run with 
privileges, so just skip over that.  Probably a few other places too.  
Relatively easy fix.  2 lines of code, maybe.
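
A rough sketch of the kind of guard being described, with the caveat that every name here (KNOWN_SUBCMDS, is_known_subcmd) is invented for illustration and does not correspond to the real hadoop shell code:

```shell
#!/usr/bin/env bash
# When the subcommand is an arbitrary user class rather than a known
# built-in, skip the privileged-execution checks entirely.
# All names below are illustrative, not actual hadoop-functions.sh code.
KNOWN_SUBCMDS=(fs jar classpath version)

is_known_subcmd() {
  local cmd
  for cmd in "${KNOWN_SUBCMDS[@]}"; do
    [[ "$1" == "$cmd" ]] && return 0
  done
  return 1
}

HADOOP_SUBCMD="org.apache.hadoop.conf.Configuration"   # user-supplied class
if is_known_subcmd "${HADOOP_SUBCMD}"; then
  echo "built-in subcommand: run privilege checks"
else
  echo "custom class: skipping privilege checks"
fi
```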





Re: ${!var} In Scripts

2019-03-01 Thread Allen Wittenauer



> On Mar 1, 2019, at 3:04 PM, Daniel Templeton  wrote:
> 
> There are a bunch of uses of the bash syntax, "${!var}", in the Hadoop 
> scripts.  Can anyone explain to me what that syntax was supposed to achieve? 


#!/usr/bin/env bash

j="hi"
m="bye"
k=j
echo "${!k}"   # indirect expansion: k holds "j", so this prints "hi"
k=m
echo "${!k}"   # k now holds "m", so this prints "bye"



[jira] [Resolved] (HADOOP-16035) Jenkinsfile for Hadoop

2019-02-21 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-16035.
---
   Resolution: Fixed
Fix Version/s: 3.3.0

> Jenkinsfile for Hadoop
> --
>
> Key: HADOOP-16035
> URL: https://issues.apache.org/jira/browse/HADOOP-16035
> Project: Hadoop Common
>  Issue Type: Improvement
>    Reporter: Allen Wittenauer
>    Assignee: Allen Wittenauer
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16035.00.patch, HADOOP-16035.01.patch
>
>
> In order to enable Github Branch Source plugin on Jenkins to test Github PRs 
> with Apache Yetus:
> - an account that can read Github
> - Apache Yetus 0.9.0+
> - a Jenkinsfile that uses the above



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Created] (HADOOP-16035) test yetus master

2019-01-08 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-16035:
-

 Summary: test yetus master
 Key: HADOOP-16035
 URL: https://issues.apache.org/jira/browse/HADOOP-16035
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Allen Wittenauer


Just a JIRA for me to test yetus github PR support against hadoop






[jira] [Resolved] (HADOOP-11688) Implement a native library version of IdMappingServiceProvider

2018-09-02 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-11688.
---
Resolution: Won't Fix

> Implement a native library version of IdMappingServiceProvider
> --
>
> Key: HADOOP-11688
> URL: https://issues.apache.org/jira/browse/HADOOP-11688
> Project: Hadoop Common
>  Issue Type: New Feature
>    Reporter: Allen Wittenauer
>Priority: Major
>
> There should be a native library version of a IdMappingServiceProvider, 
> especially since most of the requested functionality already exists.






[jira] [Resolved] (HADOOP-9522) web interfaces are not logged until after opening

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-9522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-9522.
--
Resolution: Won't Fix

> web interfaces are not logged until after opening
> -
>
> Key: HADOOP-9522
> URL: https://issues.apache.org/jira/browse/HADOOP-9522
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.4-alpha
>    Reporter: Allen Wittenauer
>Priority: Major
>
> If one mis-configures certain interfaces (in my case 
> yarn.resourcemanager.webapp.address), neither Hadoop nor jetty throws any 
> errors that the interface doesn't exist. Worse yet, the system appears to be 
> hung. It would be better if we logged what hostname:port we were attempting 
> to open before we opened it.






[jira] [Resolved] (HADOOP-10878) Hadoop servlets need ACLs

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-10878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-10878.
---
Resolution: Won't Fix

> Hadoop servlets need ACLs
> -
>
> Key: HADOOP-10878
> URL: https://issues.apache.org/jira/browse/HADOOP-10878
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics, security
>    Reporter: Allen Wittenauer
>Priority: Major
>  Labels: newbie
>
> As far as I'm aware, once a user gets past the HTTP-level authentication, all 
> servlets available on that port are available to the user.  This is a 
> security hole as there is some information and services that we don't want 
> every user to be able to access or only want them to access from certain 
> locations.






[jira] [Resolved] (HADOOP-9874) hadoop.security.logger output goes to both logs

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-9874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-9874.
--
Resolution: Won't Fix

> hadoop.security.logger output goes to both logs
> ---
>
> Key: HADOOP-9874
> URL: https://issues.apache.org/jira/browse/HADOOP-9874
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>    Reporter: Allen Wittenauer
>Priority: Major
>
> Setting hadoop.security.logger (for SecurityLogger messages) to non-null 
> sends authentication information to the other log as specified.  However, 
> that logging information also goes to the main log.   It should only go to 
> one log, not both.






[jira] [Resolved] (HADOOP-11027) HADOOP_SECURE_COMMAND catch-all

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-11027.
---
Resolution: Won't Fix

> HADOOP_SECURE_COMMAND catch-all
> ---
>
> Key: HADOOP-11027
> URL: https://issues.apache.org/jira/browse/HADOOP-11027
> Project: Hadoop Common
>  Issue Type: Bug
>    Reporter: Allen Wittenauer
>Assignee: Andras Bokor
>Priority: Minor
>
> Enabling HADOOP_SECURE_COMMAND to override jsvc doesn't work.  Here's a list 
> of issues!






[jira] [Resolved] (HADOOP-11131) getUsersForNetgroupCommand doesn't work for OS X

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-11131.
---
Resolution: Won't Fix

> getUsersForNetgroupCommand doesn't work for OS X
> 
>
> Key: HADOOP-11131
> URL: https://issues.apache.org/jira/browse/HADOOP-11131
> Project: Hadoop Common
>  Issue Type: Bug
>    Reporter: Allen Wittenauer
>Priority: Major
>
> Apple doesn't ship getent, which this command assumes.  We should use dscl 
> instead.






[jira] [Resolved] (HADOOP-11137) put up guard rails around pid and log file handling

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-11137.
---
Resolution: Won't Fix

> put up guard rails around pid and log file handling
> ---
>
> Key: HADOOP-11137
> URL: https://issues.apache.org/jira/browse/HADOOP-11137
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts, security
>    Reporter: Allen Wittenauer
>Priority: Major
>  Labels: newbie, scripts, security
>
> We should do a better job of protecting against symlink attacks in the pid 
> and log file handling code:
> a) Change the default location to have a user or id.str component
> b) Check to make sure a pid file is actually a pid file (single line, nothing 
> but numbers)
> ... maybe other stuff?






[jira] [Resolved] (HADOOP-11121) native libraries guide is extremely out of date

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-11121.
---
Resolution: Won't Fix

> native libraries guide is extremely out of date
> ---
>
> Key: HADOOP-11121
> URL: https://issues.apache.org/jira/browse/HADOOP-11121
> Project: Hadoop Common
>  Issue Type: Bug
>    Reporter: Allen Wittenauer
>Priority: Minor
>  Labels: documentation, newbie
>
> The native libraries guide says a few things that hasn't been true for a very 
> long time:
> * RHEL 4
> * autotools instead of cmake
> * The pre-built 32-bit i386-Linux native hadoop library is available as part 
> of the hadoop distribution and is located in the lib/native directory. 
> ... and probably more.






[jira] [Resolved] (HADOOP-11696) update compatibility documentation to reflect only API changes matter

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-11696.
---
Resolution: Won't Fix

> update compatibility documentation to reflect only API changes matter
> -
>
> Key: HADOOP-11696
> URL: https://issues.apache.org/jira/browse/HADOOP-11696
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Allen Wittenauer
>Priority: Major
>
> Given the changes file generated by processing JIRA and current discussion in 
> common-dev, we should update the compatibility documents to reflect reality.






[jira] [Resolved] (HADOOP-11689) Code reduction: ShellBasedIdMapping vs. ShellBasedUnixGroupsMapping

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-11689.
---
Resolution: Won't Fix

> Code reduction: ShellBasedIdMapping vs. ShellBasedUnixGroupsMapping
> ---
>
> Key: HADOOP-11689
> URL: https://issues.apache.org/jira/browse/HADOOP-11689
> Project: Hadoop Common
>  Issue Type: Improvement
>    Reporter: Allen Wittenauer
>Priority: Major
>
> There seems to be an opportunity at code reduction/simplification here.






[jira] [Resolved] (HADOOP-11690) Groups vs. IdMappingServiceProvider

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-11690.
---
Resolution: Won't Fix

> Groups vs. IdMappingServiceProvider
> ---
>
> Key: HADOOP-11690
> URL: https://issues.apache.org/jira/browse/HADOOP-11690
> Project: Hadoop Common
>  Issue Type: Improvement
>    Reporter: Allen Wittenauer
>Priority: Major
>
> There appears to be an opportunity of code reduction by re-implementing 
> Groups to use the IdMappingServiceProvider.






[jira] [Resolved] (HADOOP-11964) determine-flaky-tests makes invalid test assumptions

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-11964.
---
Resolution: Won't Fix

> determine-flaky-tests makes invalid test assumptions
> 
>
> Key: HADOOP-11964
> URL: https://issues.apache.org/jira/browse/HADOOP-11964
> Project: Hadoop Common
>  Issue Type: Test
>    Reporter: Allen Wittenauer
>Assignee: Yongjun Zhang
>Priority: Minor
>
> When running determine-flaky-tests against precommit-hadoop-build, it throws 
> a lot of errors because it assumes that every job is actually running Java 
> tests.  There should be some way to make it not do that or at least fix its 
> assumptions.






[jira] [Resolved] (HADOOP-12389) allow self-impersonation

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-12389.
---
  Resolution: Won't Fix
Target Version/s:   (was: )

> allow self-impersonation
> 
>
> Key: HADOOP-12389
> URL: https://issues.apache.org/jira/browse/HADOOP-12389
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Priority: Major
>
> This is kind of dumb:
> org.apache.hadoop.security.authorize.AuthorizationException: User: aw is not 
> allowed to impersonate aw
> Users should be able to impersonate themselves in secure and non-secure cases 
> automatically, for free.






[jira] [Resolved] (HADOOP-12558) distcp documentation is woefully out of date

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-12558.
---
Resolution: Won't Fix

> distcp documentation is woefully out of date
> 
>
> Key: HADOOP-12558
> URL: https://issues.apache.org/jira/browse/HADOOP-12558
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, tools/distcp
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Priority: Major
>
> There are a ton of distcp tune-ables that have zero documentation outside of 
> the source code.  This should be fixed.






[jira] [Resolved] (HADOOP-12578) Update hadoop_env_checks.sh to track changes from boot2docker to docker-machine

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-12578.
---
Resolution: Won't Fix

> Update hadoop_env_checks.sh to track changes from boot2docker to 
> docker-machine
> ---
>
> Key: HADOOP-12578
> URL: https://issues.apache.org/jira/browse/HADOOP-12578
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Allen Wittenauer
>Priority: Major
>
> boot2docker on OS X appears to be deprecated.  We should rewrite the 
> instructions, scripts, etc, to use docker-machine.






[jira] [Resolved] (HADOOP-12606) OS X native unit tests fail due to libjvm link errors

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-12606.
---
Resolution: Won't Fix

> OS X native unit tests fail due to libjvm link errors
> -
>
> Key: HADOOP-12606
> URL: https://issues.apache.org/jira/browse/HADOOP-12606
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>    Reporter: Allen Wittenauer
>Priority: Blocker
>
> Some of the native code unit tests fail when linked against libhadoop.so due 
> to @rpath not being defined properly on OS X.






[jira] [Resolved] (HADOOP-12650) Document all of the secret env vars

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-12650.
---
Resolution: Won't Fix

> Document all of the secret env vars
> ---
>
> Key: HADOOP-12650
> URL: https://issues.apache.org/jira/browse/HADOOP-12650
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> Over the years, developers have added all kinds of magical environment 
> variables in the Java code without any concern or thought about either a) 
> documenting them or b) whether they are already used by something else.  We 
> need to update at least hadoop-env.sh to contain a list of these env vars so 
> that end users know that they are either private/unsafe and/or how they can 
> be used.
> Just one of many examples: HADOOP_JAAS_DEBUG.






[jira] [Resolved] (HADOOP-12865) hadoop-datajoin should be documented or dropped

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-12865.
---
  Resolution: Won't Fix
Target Version/s:   (was: )

> hadoop-datajoin should be documented or dropped
> ---
>
> Key: HADOOP-12865
> URL: https://issues.apache.org/jira/browse/HADOOP-12865
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Priority: Minor
>
> hadoop-tools's datajoin is meant to be an example (I think), but it doesn't 
> actually appear to be documented anywhere so that people can see it or use it.






[jira] [Resolved] (HADOOP-12867) clean up how rumen and sls are executed

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-12867.
---
  Resolution: Won't Fix
Target Version/s:   (was: )

> clean up how rumen and sls are executed
> ---
>
> Key: HADOOP-12867
> URL: https://issues.apache.org/jira/browse/HADOOP-12867
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Priority: Major
>
> sls and rumen commands are buried where no one can see them. this should be 
> fixed.






[jira] [Resolved] (HADOOP-13250) jdiff and dependency reports aren't linked in site web pages

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-13250.
---
Resolution: Won't Fix

> jdiff and dependency reports aren't linked in site web pages
> 
>
> Key: HADOOP-13250
> URL: https://issues.apache.org/jira/browse/HADOOP-13250
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, documentation
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Vinod Kumar Vavilapalli
>Priority: Major
>
> Even though they are in the site tar ball (after HADOOP-13245), they aren't 
> actually reachable.






[jira] [Resolved] (HADOOP-13495) create-release should allow a user-supplied Dockerfile

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-13495.
---
Resolution: Won't Fix

> create-release should allow a user-supplied Dockerfile
> --
>
> Key: HADOOP-13495
> URL: https://issues.apache.org/jira/browse/HADOOP-13495
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>    Reporter: Allen Wittenauer
>Priority: Major
>
> For non-ASF builds, it'd be handy to supply a custom Dockerfile to 
> create-release.






[jira] [Resolved] (HADOOP-13490) create-release should not fail rat checks unless --asfrelease is used

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-13490.
---
Resolution: Won't Fix

> create-release should not fail rat checks unless --asfrelease is used
> -
>
> Key: HADOOP-13490
> URL: https://issues.apache.org/jira/browse/HADOOP-13490
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>    Reporter: Allen Wittenauer
>Priority: Major
>
> If someone is using create-release to build where the ASF isn't the target 
> destination, there isn't much reason to fail out if RAT errors.






[jira] [Resolved] (HADOOP-13957) prevent bad PATHs

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-13957.
---
Resolution: Won't Fix

> prevent bad PATHs
> -
>
> Key: HADOOP-13957
> URL: https://issues.apache.org/jira/browse/HADOOP-13957
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Priority: Major
>
> Apache Hadoop daemons should fail to start if the shell PATH contains world 
> writable directories or '.' (cwd).  Doing so would close an attack vector on 
> misconfigured systems.






[jira] [Resolved] (HADOOP-14011) fix the remaining shelldoc errors

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-14011.
---
Resolution: Won't Fix

> fix the remaining shelldoc errors
> -
>
> Key: HADOOP-14011
> URL: https://issues.apache.org/jira/browse/HADOOP-14011
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, documentation, test
>    Reporter: Allen Wittenauer
>Priority: Major
>
> There aren't that many shelldoc errors, and many of them (e.g., 
> dev-support/bin/*) can be set to be ignored once Apache Yetus 0.4.0 is in use.






[jira] [Resolved] (HADOOP-14016) add license and notice verifaction to create-release

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-14016.
---
Resolution: Won't Fix

> add license and notice verifaction to create-release
> 
>
> Key: HADOOP-14016
> URL: https://issues.apache.org/jira/browse/HADOOP-14016
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: build
>    Reporter: Allen Wittenauer
>Priority: Major
>
> We should add some trival license and notice jar verification to 
> create-release.






[jira] [Resolved] (HADOOP-14378) Upgrade to maven-site-plugin 3.6

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-14378.
---
Resolution: Won't Fix

> Upgrade to maven-site-plugin 3.6
> 
>
> Key: HADOOP-14378
> URL: https://issues.apache.org/jira/browse/HADOOP-14378
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Affects Versions: 3.0.0-alpha4
>Reporter: Allen Wittenauer
>Priority: Major
>
> We use releasedocmaker to generate changes and release notes from JIRA.  Some 
> of the characters used in JIRA are technically legal in markdown but for some 
> reason cause problems when passed through to the current maven-site-plugin.  
> Experiments have shown that maven-site-plugin 3.6 seems to handle a lot of 
> these situations better.  Let's upgrade it.






[jira] [Resolved] (HADOOP-14664) TestCodec's sequenceFileCodecTest doesn't use correct directory

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-14664.
---
Resolution: Won't Fix

> TestCodec's sequenceFileCodecTest doesn't use correct directory
> ---
>
> Key: HADOOP-14664
> URL: https://issues.apache.org/jira/browse/HADOOP-14664
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build, test
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Priority: Major
>
> While playing with OpenClover (HADOOP-14663), it doesn't appear that 
> sequenceFileCodecTest in TestCodec.java is properly setting the working 
> directory for the test files, resulting in test data running around in places 
> it shouldn't.






[jira] [Resolved] (HADOOP-15617) Reduce node.js and npm package loading in the Dockerfile

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-15617.
---
Resolution: Won't Fix

> Reduce node.js and npm package loading in the Dockerfile
> 
>
> Key: HADOOP-15617
> URL: https://issues.apache.org/jira/browse/HADOOP-15617
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>    Reporter: Allen Wittenauer
>Priority: Major
>
> {code}
> RUN apt-get -y install nodejs && \
> ln -s /usr/bin/nodejs /usr/bin/node && \
> apt-get -y install npm && \
> npm install npm@latest -g && \
> npm install -g bower && \
> npm install -g ember-cli
> {code}
> should get reduced to
> {code}
> RUN apt-get -y install nodejs && \
> ln -s /usr/bin/nodejs /usr/bin/node && \
> apt-get -y install npm && \
> npm install npm@latest -g
> {code}
> The locally installed versions of bower and ember-cli aren't being used 
> anymore.  Removing these cuts the docker build time significantly.






[jira] [Resolved] (HADOOP-15309) default maven in path under start-build-env.sh is the wrong one

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-15309.
---
Resolution: Won't Fix

> default maven in path under start-build-env.sh is the wrong one
> ---
>
> Key: HADOOP-15309
> URL: https://issues.apache.org/jira/browse/HADOOP-15309
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>    Reporter: Allen Wittenauer
>Priority: Trivial
>
> PATH points to /usr/bin/mvn, should be /opt/maven/bin/mvn






[jira] [Resolved] (HADOOP-14009) dev-support/bin should use hadoop-functions.sh

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-14009.
---
Resolution: Won't Fix

> dev-support/bin should use hadoop-functions.sh
> --
>
> Key: HADOOP-14009
> URL: https://issues.apache.org/jira/browse/HADOOP-14009
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>    Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Major
>
> We can dedupe some code by bouncing off of the in-tree hadoop-function.sh 
> versions of things.






[jira] [Resolved] (HADOOP-14224) add dnsutils to Dockerfile to aid in debugging maven repo failures

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-14224.
---
Resolution: Won't Fix

> add dnsutils to Dockerfile to aid in debugging maven repo failures
> --
>
> Key: HADOOP-14224
> URL: https://issues.apache.org/jira/browse/HADOOP-14224
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Allen Wittenauer
>    Assignee: Allen Wittenauer
>Priority: Trivial
>
> It would be useful to have dig to troubleshoot if the Docker-local resolver 
> is working correctly.






[jira] [Resolved] (HADOOP-14724) Get a daily QBT run for Windows

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-14724.
---
Resolution: Fixed

Was fixed for quite a while, now appears to be broken again. Oh well.

> Get a daily QBT run for Windows
> ---
>
> Key: HADOOP-14724
> URL: https://issues.apache.org/jira/browse/HADOOP-14724
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>    Assignee: Allen Wittenauer
>Priority: Major
>
> We used to have Windows as part of our testing infrastructure.  Let's get it 
> back up and running now that the ASF has more boxes (and who knows what the 
> status of the hadoop-win-1 box is)






Re: Checkstyle shows false positive report

2018-08-15 Thread Allen Wittenauer


> On Aug 15, 2018, at 4:49 AM, Kitti Nánási  
> wrote:
> 
> Hi All,
> 
> We noticed that the checkstyle run by the pre commit job started to show
> false positive reports, so I created HADOOP-15665
> .
> 
> Until that is fixed, keep in mind to run the checkstyle by your IDE
> manually for the patches you upload or review.


I’ve tracked it down to HDDS-119.  I have no idea why that JIRA is 
changing the checkstyle suppressions file, since the ASF license check is its 
own thing and checkstyle wouldn’t be looking at those files anyway.

That said, there is a bug in Yetus in that it should have reported that 
checkstyle failed to run. I’ve filed YETUS-660 for that.



Re: [DISCUSS] Alpha Release of Ozone

2018-08-08 Thread Allen Wittenauer



> On Aug 8, 2018, at 12:56 PM, Anu Engineer  wrote:
> 
>> Has anyone verified that a Hadoop release doesn't have _any_ of the extra 
>> ozone bits that are sprinkled outside the maven modules?
> As far as I know that is the state, we have had multiple Hadoop releases 
> after ozone has been merged. So far no one has reported Ozone bits leaking 
> into Hadoop. If we find something like that, it would be a bug.

There hasn't been a release from a branch where Ozone has been merged 
yet. The first one will be 3.2.0.  Running create-release off of trunk 
presently shows bits of Ozone in dev-support, hadoop-dist, and elsewhere in the 
Hadoop source tar ball. 

So, consider this as a report. IMHO, cutting an Ozone release prior to 
a Hadoop release is ill-advised given the distribution impact and the 
requirements of the merge vote.  



Re: [DISCUSS] Alpha Release of Ozone

2018-08-08 Thread Allen Wittenauer


Given that there are some Ozone components spread out past the core maven 
modules, is the plan to release a Hadoop Trunk + Ozone tar ball or is more work 
going to go into segregating the Ozone components prior to release? Has anyone 
verified that a Hadoop release doesn't have _any_ of the extra ozone bits that 
are sprinkled outside the maven modules?



[jira] [Resolved] (HADOOP-15653) Please add OWASP Dependency Check to the build (pom.xml)

2018-08-04 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-15653.
---
Resolution: Implemented

Closing this as implemented.

> Please add OWASP Dependency Check to the build (pom.xml)
> 
>
> Key: HADOOP-15653
> URL: https://issues.apache.org/jira/browse/HADOOP-15653
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: build
>Affects Versions: 3.0.0
> Environment: All development, build, test, environments.
>Reporter: Albert Baker
>Priority: Major
>  Labels: build, easy-fix, security
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
>  Please add OWASP Dependency Check to the build (pom.xml).  OWASP DC makes an 
> outbound REST call to MITRE Common Vulnerabilities & Exposures (CVE) to 
> perform a lookup for each dependent .jar to list any/all known 
> vulnerabilities for each jar.  This step is needed because a manual MITRE CVE 
> lookup/check on the main component does not include checking for 
> vulnerabilities in components or in dependent libraries.
> OWASP Dependency Check: 
> https://www.owasp.org/index.php/OWASP_Dependency_Check has plug-ins for most 
> Java build/make types (ant, maven, ivy, gradle).   
> Also, add the appropriate command to the nightly build to generate a report 
> of all known vulnerabilities in any/all third-party libraries/dependencies 
> that get pulled in, for example: mvn -Powasp -Dtest=false -DfailIfNoTests=false 
> clean aggregate
> Generating this report nightly/weekly will help inform the project's 
> development team if any dependent libraries have reported known 
> vulnerabilities.  Project teams that keep up with removing vulnerabilities on 
> a weekly basis will help protect businesses that rely on these open source 
> components.






[jira] [Created] (HADOOP-15647) dependency checker test

2018-08-01 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-15647:
-

 Summary: dependency checker test
 Key: HADOOP-15647
 URL: https://issues.apache.org/jira/browse/HADOOP-15647
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Allen Wittenauer









[jira] [Created] (HADOOP-15617) Reduce node.js and npm package loading in the Dockerfile

2018-07-18 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-15617:
-

 Summary: Reduce node.js and npm package loading in the Dockerfile
 Key: HADOOP-15617
 URL: https://issues.apache.org/jira/browse/HADOOP-15617
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Allen Wittenauer


{code}
RUN apt-get -y install nodejs && \
ln -s /usr/bin/nodejs /usr/bin/node && \
apt-get -y install npm && \
npm install npm@latest -g && \
npm install -g bower && \
npm install -g ember-cli
{code}

should get reduced to

{code}
RUN apt-get -y install nodejs && \
ln -s /usr/bin/nodejs /usr/bin/node && \
apt-get -y install npm && \
npm install npm@latest -g
{code}

The locally installed versions of bower and ember-cli aren't being used anymore.







Re: [VOTE] Release Apache Hadoop 3.0.3 (RC0)

2018-06-07 Thread Allen Wittenauer
> On Jun 7, 2018, at 11:47 AM, Steve Loughran  wrote:
> 
> Actually, Yongjun has been really good at helping me get set up for a 2.7.7 
> release, including "things you need to do to get GPG working in the docker 
> image”

*shrugs* I use a different release script after some changes broke the 
in-tree version for building on OS X and I couldn’t get the fixes committed 
upstream.  So not sure what the problems are that you are hitting.

> On Jun 7, 2018, at 1:08 PM, Nandakumar Vadivelu  
> wrote:
> 
> It will be helpful if we can get the correct steps, and also update the wiki.
> https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+Release+Validation

Yup. Looking forward to seeing it. 



Re: [VOTE] Release Apache Hadoop 3.0.3 (RC0)

2018-06-07 Thread Allen Wittenauer


> On Jun 7, 2018, at 3:46 AM, Lokesh Jain  wrote:
> 
> Hi Yongjun
> 
> I followed Nanda’s steps and I see the same issues as reported by Nanda.


This situation is looking like an excellent opportunity for PMC members to 
mentor people on how the build works since it’s apparent that three days later, 
no one has mentioned that those steps aren’t the ones to build the complete 
website and haven’t been since at least 2.4.






Re: Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2018-05-15 Thread Allen Wittenauer

> On May 15, 2018, at 10:16 AM, Chris Douglas  wrote:
> 
> They've been failing for a long time. It can't install bats, and
> that's fatal? -C


The bats error is new and causes the build to fail enough that it 
produces the email output.  For the past few months, it hasn’t been producing 
email output at all because the builds have been timing out.  (The last ‘good’ 
report was Feb 26.)  Since no one [*] is paying attention to them enough to 
notice, I figured it was better to free up the cycles for the rest of the ASF. 

* - I noticed a while back, but for various reasons I’ve mostly moved to only 
working on Hadoop things where I’m getting paid.



Re: Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2018-05-15 Thread Allen Wittenauer


FYI:

I’m going to disable the branch-2 nightly jobs.



Re: [NOTIFICATION] Hadoop trunk rebased

2018-04-27 Thread Allen Wittenauer

Did the patch that fixes the mountain of maven warnings get missed?

> On Apr 26, 2018, at 11:52 PM, Akira Ajisaka  wrote:
> 
> + common-dev and mapreduce-dev
> 
> On 2018/04/27 6:23, Owen O'Malley wrote:
>> As we discussed in hdfs-dev@hadoop, I did a force push to Hadoop's trunk to
>> replace the Ozone merge with a rebase.
>> That means that you'll need to rebase your branches.
>> .. Owen
> 





Fwd: Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-03-15 Thread Allen Wittenauer

For my part of the HDFS bug bash, I’ve gotten the ASF Windows build 
working again. Starting tomorrow, results will be sent to the *-dev lists.

A few notes:

* It only runs the unit tests.  There’s not much point in running the other 
Yetus plugins since those are covered by the Linux one and this build is slow 
enough as it is.

* There are two types of ASF build nodes: Windows Server 2012 and Windows 
Server 2016. This job can run on both and will use whichever one has a free 
slot.

* It ALWAYS applies HADOOP-14667.05.patch prior to running.  As a result, this 
is only set up for trunk with no parameterization to run other branches.

* The URI handling for file paths in hadoop-common and elsewhere is pretty 
broken on Windows, so many many many unit tests are failing and I wouldn't be 
surprised if Windows hadoop installs are horked as a result.

* Runtime is about 12-13 hours with many tests taking significantly longer than 
their UNIX counterparts.  My guess is that this is caused by winutils.  Changing 
from winutils to Java 7 API calls would get this more in line and be a 
significant performance boost for Windows clients/servers as well.

Have fun.

=

For more details, see https://builds.apache.org/job/hadoop-trunk-win/406/ 


[Mar 14, 2018 6:26:58 PM] (xyao) HDFS-13251. Avoid using hard coded datanode 
data dirs in unit tests.
[Mar 14, 2018 8:05:24 PM] (jlowe) MAPREDUCE-7064. Flaky test
[Mar 14, 2018 8:14:36 PM] (inigoiri) HDFS-13198. RBF: RouterHeartbeatService 
throws out CachedStateStore
[Mar 14, 2018 8:36:53 PM] (wangda) Revert "HADOOP-13707. If kerberos is enabled 
while HTTP SPNEGO is not
[Mar 14, 2018 10:47:56 PM] (fabbri) HADOOP-15278 log s3a at info. Contributed 
by Steve Loughran.




-1 overall


The following subsystems voted -1:
   unit


The following subsystems are considered long running:
(runtime bigger than 1h 00m 00s)
   unit


Specific tests:

   Failed CTEST tests :

  test_test_libhdfs_threaded_hdfs_static 

   Failed junit tests :

  hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
  hadoop.fs.contract.rawlocal.TestRawlocalContractAppend 
  hadoop.fs.TestFsShellCopy 
  hadoop.fs.TestFsShellList 
  hadoop.fs.TestLocalFileSystem 
  hadoop.http.TestHttpServer 
  hadoop.http.TestHttpServerLogs 
  hadoop.io.compress.TestCodec 
  hadoop.io.nativeio.TestNativeIO 
  hadoop.ipc.TestSocketFactory 
  hadoop.metrics2.impl.TestStatsDMetrics 
  hadoop.metrics2.sink.TestRollingFileSystemSinkWithLocal 
  hadoop.security.TestSecurityUtil 
  hadoop.security.TestShellBasedUnixGroupsMapping 
  hadoop.security.token.TestDtUtilShell 
  hadoop.util.TestNativeCodeLoader 
  hadoop.fs.TestWebHdfsFileContextMainOperations 
  hadoop.hdfs.client.impl.TestBlockReaderLocalLegacy 
  hadoop.hdfs.crypto.TestHdfsCryptoStreams 
  hadoop.hdfs.qjournal.client.TestQuorumJournalManager 
  hadoop.hdfs.qjournal.server.TestJournalNode 
  hadoop.hdfs.qjournal.server.TestJournalNodeSync 
  hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks 
  hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped 
  hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages 
  hadoop.hdfs.server.blockmanagement.TestOverReplicatedBlocks 
  hadoop.hdfs.server.blockmanagement.TestReplicationPolicy 
  hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl 
  hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles 
  hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory 
  hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy 
  hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement 
  hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
  hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter 
  hadoop.hdfs.server.datanode.fsdataset.impl.TestProvidedImpl 
  hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation 
  hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica 
  hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage 
  hadoop.hdfs.server.datanode.TestBlockRecovery 
  hadoop.hdfs.server.datanode.TestBlockScanner 
  hadoop.hdfs.server.datanode.TestDataNodeFaultInjector 
  hadoop.hdfs.server.datanode.TestDataNodeMetrics 
  hadoop.hdfs.server.datanode.TestDataNodeUUID 
  hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
  hadoop.hdfs.server.datanode.TestDirectoryScanner 
  hadoop.hdfs.server.datanode.TestHSync 
  hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame 
  hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand 
  hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC 
  hadoop.hdfs.server.federation.router.TestRouterAdminCLI 
  hadoop.hdfs.server.mover.TestStorageMover 
  

[jira] [Created] (HADOOP-15309) default maven in path under start-build-env.sh is the wrong one

2018-03-12 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-15309:
-

 Summary: default maven in path under start-build-env.sh is the 
wrong one
 Key: HADOOP-15309
 URL: https://issues.apache.org/jira/browse/HADOOP-15309
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Allen Wittenauer


PATH points to /usr/bin/mvn, should be /opt/maven/bin/mvn






[jira] [Created] (HADOOP-15279) increase maven heap size recommendations

2018-03-01 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-15279:
-

 Summary: increase maven heap size recommendations
 Key: HADOOP-15279
 URL: https://issues.apache.org/jira/browse/HADOOP-15279
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, test
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer


1G is just a bit too low for JDK8+surefire 2.20+hdfs unit tests running in 
parallel.  Bump it up a bit more.
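In shell terms the recommendation change is just a larger -Xmx in MAVEN_OPTS; a 
minimal sketch (the 3g value here is illustrative, not necessarily what the 
patch settled on):

```shell
# Sketch only: -Xmx1g is too small for JDK8 + surefire 2.20 + HDFS unit
# tests running in parallel, so bump the maven JVM heap before building.
export MAVEN_OPTS="-Xms256m -Xmx3g"

# A parallel test run is where the 1g heap typically falls over, e.g.:
#   mvn -Pparallel-tests test

echo "$MAVEN_OPTS"
```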






[jira] [Resolved] (HADOOP-9085) start namenode failure,bacause pid of namenode pid file is other process pid or thread id before start namenode

2018-02-06 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-9085.
--
  Resolution: Duplicate
Target Version/s: 2.0.2-alpha, 2.0.1-alpha  (was: 2.0.1-alpha, 2.0.2-alpha)

> start namenode failure,bacause pid of namenode pid file is other process pid 
> or thread id before start namenode
> ---
>
> Key: HADOOP-9085
> URL: https://issues.apache.org/jira/browse/HADOOP-9085
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin, scripts
>Affects Versions: 2.0.1-alpha, 2.0.3-alpha
> Environment: NA
>Reporter: liaowenrui
>Priority: Major
>
> If the pid recorded in the namenode pid file belongs to some other process 
> or thread id before the namenode is started, starting the namenode will fail, 
> because hadoop-daemon.sh checks that pid with the kill -0 command first. When 
> the pid belongs to another process or thread, kill -0 returns success, so the 
> script concludes the namenode is running when in reality it is not.
> 2338 is dead namenode pid 
> 2305 is datanode pid
> cqn2:/tmp # kill -0 2338
> cqn2:/tmp # ps -wweLo pid,ppid,tid | grep 2338
>  2305 1  2338
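The failure mode above can be sketched in a few lines of shell (illustrative 
only, not the actual hadoop-daemon.sh code; the `sleep` process stands in for 
whatever unrelated pid a stale pid file might point at):

```shell
# kill -0 succeeds if *any* signalable process with that id exists, so on
# its own it cannot tell a live namenode from an unrelated pid. A stronger
# check also compares the command name behind the pid.
pidfile=$(mktemp)
sleep 30 &                        # stand-in for some unrelated process
echo $! > "$pidfile"              # a stale pid file pointing at it

pid=$(cat "$pidfile")
if kill -0 "$pid" 2>/dev/null; then
  cmd=$(ps -o comm= -p "$pid")
  if [ "$cmd" = "java" ]; then    # a real namenode would be a java process
    echo "namenode already running as pid $pid"
  else
    echo "stale pid file: pid $pid is '$cmd', not the namenode"
  fi
fi
kill "$pid" 2>/dev/null
rm -f "$pidfile"
```

This prints the "stale pid file" branch, which is exactly the case the 
kill-0-only check in the report gets wrong.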






Switched PreCommit-Admin over to Apache Yetus

2018-01-16 Thread Allen Wittenauer
bcc: bui...@apache.org, d...@hbase.apache.org, d...@hive.apache.org, 
common-dev@hadoop.apache.org, d...@phoenix.apache.org, d...@oozie.apache.org

(These are all of the projects that had pending issues.  This is not 
all of the groups that actually rely upon this code...)

The recent JIRA upgrade broke PreCommit-Admin.  

This breakage was obviously an unintended consequence.  This python 
code hasn’t been touched in a very long time.  In fact, it was still coming 
from the Hadoop SVN tree, as it had never been migrated to git.  So it isn’t too 
surprising that after all this time something finally broke it.

	Luckily, Apache Yetus was already in the process of adopting it for a 
variety of reasons that I won't go into here.  With the breakage, this work 
naturally became more urgent.  With the help of the Apache Yetus community 
doing some quick reviews, I just switched PreCommit-Admin over to using the 
master version of the equivalent code in the Yetus source tree.  As soon as the 
community can get a 0.7.0 release out, we’ll switch it over from master to 
0.7.0 so that it can follow our regular release cadence.  This also means that 
JIRA issues can be filed against Yetus for bugs seen in the code base or for 
feature requests.  [Hopefully with code and docs attached. :) ]

In any case, with the re-activation of this job, all unprocessed jobs 
just kicked off.  So don't be too surprised by the influx of feedback.

As a sidenote, there are some other sticky issues with regards to 
precommit setups on Jenkins.  I'll be sending another note in the future on 
that though. I've had enough excitement for today. :)

Thanks!





Re: PreCommit-Admin fails to fetch issue list from jira

2018-01-16 Thread Allen Wittenauer

> On Jan 16, 2018, at 1:41 PM, Vihang Karajgaonkar  wrote:
> 
> yes, thats correct. I had created 
> https://issues.apache.org/jira/browse/INFRA-15849 to get that fixed. But 
> looks like based on the comment there we need to modify that job to use REST 
> instead of SOAP? 

I think there is some confusion.  There are two parts in play.

The SOAP API is (or was, depending upon which fork we’re talking about) 
used to talk to JIRA to write comments.  That’s only used by older strains of 
test-patch.  Some versions have replaced it with REST calls.  [For the record, 
Apache Yetus replaced that code with REST calls years ago in one of the first 
set of patches.] FWIW, it’s been documented by Atlassian that SOAP was going to 
go away for quite a while.

precommit-admin uses raw HTTP calls.  The problem with it is that the 
URLs it is using for JIRA now require authentication.  The patch I’ve posted in 
YETUS-594 includes adding support to auth against those URLs.
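As a hedged sketch of what that amounts to (the variable names and curl 
invocation are illustrative, not the actual YETUS-594 patch; the filter id is 
the one quoted elsewhere in this thread):

```shell
# Build the JIRA issueviews XML export URL that precommit-admin polls.
FILTER_ID=12323182
URL="https://issues.apache.org/jira/sr/jira.issueviews:searchrequest-xml/${FILTER_ID}/SearchRequest-${FILTER_ID}.xml?tempMax=50"

# Anonymous GETs against this URL now fail (HTTP 400), so the fetch has
# to send credentials, e.g.:
#   curl -sf -u "${JIRA_USER}:${JIRA_PASS}" "$URL" -o pending-issues.xml

echo "$URL"
```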

> Do we know who owns that job?


In the case of test-patch, again, it depends upon the strain.  Most of 
the heavy hitters (Hadoop, HBase, Hive, Avro, Accumulo, … lucene and solr are 
in progress… umm… sure I’m missing some ) have switched over to using Apache 
Yetus instead of keeping a local copy. This way resources can be shared and 
work pooled together.  It is getting updated fairly regularly.

in the case of precommit-admin, technically it’s Hadoop.  But it is 
effectively no one, since the branch it comes from doesn’t even exist in git 
(it’s still pulled from svn). Post-YETUS-594, however, I’m planning on dropping 
a note to builds@ and switching precommit-admin over to come from Apache Yetus, 
where it will actually get some attention.





Re: PreCommit-Admin fails to fetch issue list from jira

2018-01-16 Thread Allen Wittenauer

> On Jan 16, 2018, at 1:27 PM, Vihang Karajgaonkar  wrote:
> 
> Hive pre-commit jobs are broken as well since the JIRA upgrade. Did you have 
> to make any changes to hadoop-precommit job to make it work? Any pointers 
> will help us fix ours as well.

Until precommit-admin gets fixed, no one is getting automated JIRA 
testing.





Re: PreCommit-Admin fails to fetch issue list from jira

2018-01-16 Thread Allen Wittenauer

FWIW, the new JIRA has broken a few things thus far.  Akira fixed an issue with 
test-patch and I’ve got a fix for precommit-admin as part of YETUS-594.  As 
soon as it and YETUS-592 gets a review+commit, I’ll start the 0.7.0 release 
process.

Also, as a reminder, Apache Yetus uses +1’s from contributors as commit votes 
if anyone would like to help...

> On Jan 16, 2018, at 3:36 AM, Steve Loughran  wrote:
> 
> There's been a JIRA update. Ask at in...@apache.org ; they may have locked it 
> down more
> 
>> On 16 Jan 2018, at 10:38, Duo Zhang  wrote:
>> 
>> Started from this build
>> 
>> https://builds.apache.org/job/PreCommit-Admin/329113/
>> 
>> Have dug a bit, it seems that now jira does not allow our query to be run
>> by unauthorized user, you will get a 400 if you do not login. This is the
>> error page when accessing
>> 
>> https://issues.apache.org/jira/sr/jira.issueviews:searchrequest-xml/12323182/SearchRequest-12323182.xml?tempMax=50
>> HTTP Status 400 - A value with ID '12315621' does not exist for the field
>> 'project'.
>> 
>> *type* Status report
>> 
>> *message* *A value with ID '12315621' does not exist for the field
>> 'project'.*
>> 
>> *description* *The request sent by the client was syntactically incorrect.*
>> --
>> Apache Tomcat/8.5.6
>> 
>> The project '12315621 ' is Ranger. I do not have the permission to modify
>> the original filter, and do not have the permission to create a public
>> filter either. So for me there is no way to fix the problem.
>> 
>> Could the Ranger project check if you have changed some permission configs?
>> Or could someone who has the permission to create public filters creates a
>> new filter without Ranger to see if it works? There are plenty of projects
>> which rely on the PreCommit-Admin job.
>> 
>> Thanks.
> 
> 





Re: [ANNOUNCE] Apache Hadoop 3.0.0 GA is released

2017-12-18 Thread Allen Wittenauer

It’s significantly more concerning that 3.0.0-beta1 doesn’t show up here:

http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-common/release/index.html

It looks like they are missing from the source tag too.  I wonder what else is 
missing.


> On Dec 18, 2017, at 11:15 AM, Andrew Wang  wrote:
> 
> Moving general@ to BCC,
> 
> The main page and releases posts on hadoop.apache.org are pretty clear
> about this being a diff from beta1, am I missing something? Pasted below:
> 
> After four alpha releases and one beta release, 3.0.0 is generally
> available. 3.0.0 consists of 302 bug fixes, improvements, and other
> enhancements since 3.0.0-beta1. All together, 6242 issues were fixed as
> part of the 3.0.0 release series since 2.7.0.
> 
> Users are encouraged to read the overview of major changes
>  in 3.0.0. The GA release
> notes
> 
> and changelog
> 
> detail
> the changes since 3.0.0-beta1.
> 
> 
> 
> On Mon, Dec 18, 2017 at 10:32 AM, Arpit Agarwal 
> wrote:
> 
>> That makes sense for Beta users but most of our users will be upgrading
>> from a previous GA release and the changelog will mislead them. The webpage
>> does not mention this is a delta from the beta release.
>> 
>> 
>> 
>> 
>> 
>> *From: *Andrew Wang 
>> *Date: *Friday, December 15, 2017 at 10:36 AM
>> *To: *Arpit Agarwal 
>> *Cc: *general , "common-dev@hadoop.apache.org"
>> , "yarn-...@hadoop.apache.org" <
>> yarn-...@hadoop.apache.org>, "mapreduce-...@hadoop.apache.org" <
>> mapreduce-...@hadoop.apache.org>, "hdfs-...@hadoop.apache.org" <
>> hdfs-...@hadoop.apache.org>
>> *Subject: *Re: [ANNOUNCE] Apache Hadoop 3.0.0 GA is released
>> 
>> 
>> 
>> Hi Arpit,
>> 
>> 
>> 
>> If you look at the release announcements, it's made clear that the
>> changelog for 3.0.0 is diffed based on beta1. This is important since users
>> need to know what's different from the previous 3.0.0-* releases if they're
>> upgrading.
>> 
>> 
>> 
>> I agree there's additional value to making combined release notes, but
>> it'd be something additive rather than replacing what's there.
>> 
>> 
>> 
>> Best,
>> 
>> Andrew
>> 
>> 
>> 
>> On Fri, Dec 15, 2017 at 8:27 AM, Arpit Agarwal 
>> wrote:
>> 
>> 
>> Hi Andrew,
>> 
>> Thank you for all the hard work on this release. I was out the last few
>> days and didn’t get a chance to evaluate RC1 earlier.
>> 
>> The changelog looks incorrect. E.g. This gives an impression that there
>> are just 5 incompatible changes in 3.0.0.
>> http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/
>> hadoop-common/release/3.0.0/CHANGES.3.0.0.html
>> 
>> I assume you only counted 3.0.0 changes in this log excluding
>> alphas/betas. However, users shouldn’t have to manually compile
>> incompatibilities by summing up a/b release notes. Can we fix the changelog
>> after the fact?
>> 
>> 
>> 
>> 
>> On 12/14/17, 10:45 AM, "Andrew Wang"  wrote:
>> 
>>Hi all,
>> 
>>I'm pleased to announce that Apache Hadoop 3.0.0 is generally available
>>(GA).
>> 
>>3.0.0 GA consists of 302 bug fixes, improvements, and other
>> enhancements
>>since 3.0.0-beta1. This release marks a point of quality and stability
>> for
>>the 3.0.0 release line, and users of earlier 3.0.0-alpha and -beta
>> releases
>>are encouraged to upgrade.
>> 
>>Looking back, 3.0.0 GA is the culmination of over a year of work on the
>>3.0.0 line, starting with 3.0.0-alpha1 which was released in September
>>2016. Altogether, 3.0.0 incorporates 6,242 changes since 2.7.0.
>> 
>>Users are encouraged to read the overview of major changes
>> in 3.0.0. The GA
>> release
>>notes
>>> dist/hadoop-common/release/3.0.0/RELEASENOTES.3.0.0.html>
>> and changelog
>>> dist/hadoop-common/release/3.0.0/CHANGES.3.0.0.html>
>> 
>>detail
>>the changes since 3.0.0-beta1.
>> 
>>The ASF press release provides additional color and highlights some of
>> the
>>major features:
>> 
>>https://globenewswire.com/news-release/2017/12/14/
>> 1261879/0/en/The-Apache-Software-Foundation-Announces-
>> Apache-Hadoop-v3-0-0-General-Availability.html
>> 
>>Let me end by thanking the many, many contributors who helped with this
>>release line. We've only had three major releases in Hadoop's 10 year
>>history, and this is our biggest major release ever. It's an incredible
>>accomplishment for our 

Re: Same jenkins build running on 2 patches.

2017-12-02 Thread Allen Wittenauer

> On Dec 1, 2017, at 1:36 PM, Jason Lowe  wrote:
>  I thought the admin precommit build was kicking off the project-specific 
> precommit build with an attachment ID argument so the project precommit can 
> be consistent with the admin precommit build on what triggered the precommit 
> process.

	precommit-admin generates a list of JIRA issue ids from a simple JQL 
statement and passes those to the build jobs.  The per-project precommit build 
is the first time the attachment ids are determined.

> If the patch is tracked by attachment ID then I think the build would remain 
> consistent even when users attach new patches in the middle of the precommit 
> process.

Implementation of YETUS-504 will almost certainly eliminate the double 
download in Dockerized situations since the credentials will only be available 
outside of it.  In the process, certain testing scenarios also get dropped, but 
security is more important.

That said, fixing users attempting to prematurely optimize test runs is 
pretty low on the priority list.  For my own project list, some way to inform a 
user that only 20-30% of the HDFS unit tests in branch-2 are getting executed 
is way higher.  [trunk comes in at around 80% post-surefire 2.20.1.]  
precommit’s output doesn’t reflect tests that don’t run because surefire 
doesn’t report them either. See also SUREFIRE-1447.  


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: Same jenkins build running on 2 patches.

2017-12-01 Thread Allen Wittenauer

> On Dec 1, 2017, at 12:18 PM, Rushabh Shah  wrote:
> Can someone explain me what happened ?

	Yetus downloaded the patch to make sure it applied cleanly before doing 
anything else, so that it wouldn’t burn cycles on the build boxes for no 
reason.  Docker mode was active, so it then went to re-exec itself under 
Docker.  But it had to build the Docker image first.  This can take anywhere 
from 9 minutes to 20 minutes, depending primarily on which branch’s Dockerfile 
was in use.  While this was going on, another two patches were uploaded.  The 
Docker build finished.  When Yetus re-exec’ed itself under Docker, it re-grabbed 
the patch (since the world, including Yetus itself, may be different now that 
it is inside Docker).  In this case, it grabbed the last of the newly uploaded 
patches and attempted to churn its way through it.
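The two-pass flow described above can be sketched as a tiny state function. This is purely illustrative: the function name and state strings are invented for this example and are not actual Yetus code.

```shell
# Illustrative sketch of the precommit flow: first pass runs on the bare
# build box, second pass runs after re-exec'ing inside the container.
next_step() {
  # $1: non-empty when we are already re-exec'ed inside the Docker container
  if [ -z "$1" ]; then
    # outside the container: verify the patch applies, build the image,
    # then re-exec this script under Docker
    echo "build-image-and-reexec"
  else
    # inside the container: the world may have changed, so fetch the
    # patch again (possibly a newer attachment!) before testing
    echo "refetch-patch-and-test"
  fi
}
next_step ""               # first pass, on the bare build box
next_step "in-container"   # second pass, inside Docker
```

The second fetch is exactly where a patch uploaded mid-build can sneak in.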

a) Uploading two patches at once has never ever worked and will likely 
never be made to work. (There are lots of reasons for this.)

b) Before uploading a new patch, wait for the feedback, or at least make 
sure the Jenkins job is actually past “Determining needed tests”.  Just be 
aware that test output is going to get very hard to follow with all of the 
cross posting.



Re: [VOTE] Release Apache Hadoop 3.0.0 RC0

2017-11-30 Thread Allen Wittenauer

> On Nov 30, 2017, at 1:07 AM, Rohith Sharma K S  
> wrote:
> 
> 
> >. If ATSv1 isn’t replaced by ATSv2, then why is it marked deprecated?
> Ideally it should not be. Can you point out where it is marked as deprecated? 
> If it is in the historyserver daemon start, that change was made very long 
> back when the timeline server was added. 


Ahh, I see where all the problems lie.  No one is paying attention to the 
deprecation message because it’s kind of oddly worded:

* It really means “don’t use ‘yarn historyserver’ use ‘yarn timelineserver’ ” 
* ‘yarn historyserver’ was removed from the documentation in 2.7.0
* ‘yarn historyserver’ doesn’t appear in the yarn usage output
* ‘yarn timelineserver’ runs the exact same class

There’s no reason for ‘yarn historyserver’ to exist in 3.x.  Just run ‘yarn 
timelineserver’ instead.
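Since both subcommands run the same class, the deprecation boils down to a simple alias. Here is a hedged sketch of such a shim; the function name and messages are invented for illustration and differ from the actual hadoop shell code:

```shell
# Hypothetical deprecation shim: map the deprecated subcommand onto its
# replacement and warn on stderr. Names are illustrative only.
run_yarn_subcommand() {
  case "$1" in
    historyserver)
      echo "WARNING: 'yarn historyserver' is deprecated; use 'yarn timelineserver' instead." >&2
      set -- timelineserver
      ;;
  esac
  # both subcommands ultimately launch the same class
  echo "launching ${1}"
}
run_yarn_subcommand historyserver   # -> launching timelineserver
```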



Re: [VOTE] Release Apache Hadoop 3.0.0 RC0

2017-11-25 Thread Allen Wittenauer

> On Nov 21, 2017, at 2:16 PM, Vinod Kumar Vavilapalli  
> wrote:
> 
>>> - $HADOOP_YARN_HOME/sbin/yarn-daemon.sh start historyserver doesn't even 
>>> work. Not just deprecated in favor of timelineserver as was advertised.
>> 
>>  This works for me in trunk and the bash code doesn’t appear to have 
>> changed in a very long time.  Probably something local to your install.  (I 
>> do notice that the deprecation message says “starting” which is awkward when 
>> the stop command is given though.)  Also: is the deprecation message even 
>> true at this point?
> 
> 
> Sorry, I mischaracterized the problem.
> 
> The real issue is that I cannot use this command line when the MapReduce 
> JobHistoryServer is already started on the same machine.

The specific string is:

hadoop-${HADOOP_IDENT_STRING}-${HADOOP_SUBCMD}.pid

More specifically, the pid handling code will conflict if the following 
are true:

* same machine (obviously)
* same subcommand name
* same HADOOP_IDENT_STRING: which by default is the user name of 
whatever starts it… but was designed to be overridden way back in hadoop 0.x.

… which means for most production setups, this is probably not a real 
problem.
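A minimal sketch of that naming scheme: only the hadoop-${HADOOP_IDENT_STRING}-${HADOOP_SUBCMD}.pid pattern comes from the actual code; the helper function itself is invented for illustration.

```shell
# Sketch of the pid-file naming described above. The helper is
# illustrative; only the filename pattern is real.
pid_file() {
  # HADOOP_IDENT_STRING defaults to whoever starts the daemon
  printf 'hadoop-%s-%s.pid\n' "${HADOOP_IDENT_STRING:-$(whoami)}" "$1"
}

HADOOP_IDENT_STRING=alice
pid_file historyserver   # -> hadoop-alice-historyserver.pid
HADOOP_IDENT_STRING=bob  # overriding the ident string avoids the clash
pid_file historyserver   # -> hadoop-bob-historyserver.pid
```

Same user plus same subcommand name yields the same pid file, which is exactly the conflict described above.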


> So, it looks like in shell-scripts, there can ever be only one daemon of a 
> given name, irrespective of which daemon scripts are invoked.

Correct.  Naming multiple, different daemons the same thing is 
extremely anti-user.   In fact, I thought this was originally about the “other” 
history server.

> 
> We need to figure out two things here
>  (a) The behavior of this command. Clearly, it will conflict with the 
> MapReduce JHS - only one of them can be started on the same node.

… by the same user, by default.  Started by a different user or with a 
different HADOOP_IDENT_STRING, it will come up just fine.

>  (b) We need to figure out if this V1 TimelineService should even be support 
> given ATSv2.

If ATSv1 isn’t replaced by ATSv2, then why is it marked deprecated?

> On Nov 22, 2017, at 9:45 AM, Brahma Reddy Battula  wrote:
> 
> 1) Change the name
> 2) Create PID based on the CLASS Name, here applicationhistoryserver and 
> jobhistoryserver
> 3) Use same as branch-2.9..i.e suffixing with mapred or yarn
> 
> 
> @allen, any thoughts on this..?

Using the classname works in this instance, but just as we saw with the 
router daemons, people tend to use the same class names when building different 
components. It also means that if different daemons can be started in different 
ways from the same class dependent upon options, this conflict will still 
exist.  Also, with dynamic commands, it is very possible to run the same daemon 
from multiple start points.

As part of this discussion, I think it’s important to recognize:

a) This is likely to be primarily impacting developers.
b) We’re talking about two daemons where one has been deprecated.
c) Calling two different daemons “history server” is just awful from an end 
user perspective.
d) There is already a work around in place if one absolutely needs to run both 
on the same node as the same user, just as people do with datanode and 
nodemanager today.






Re: [VOTE] Release Apache Hadoop 3.0.0 RC0

2017-11-20 Thread Allen Wittenauer

The original release script and instructions broke the build up into 
three or so steps. When I rewrote it, I kept that same model. It’s probably 
time to re-think that.  In particular, it should probably be one big step that 
even does the maven deploy.  There’s really no harm in doing that given that 
there is still a manual step to release the deployed jars into the production 
area.

We just need to:

a) add an option to do a deploy instead of just an install; if create-release 
is in ASF mode, always activate deploy
b) pull the maven settings.xml file (and only the maven settings file… we don’t 
want the repo!) into the docker build environment
c) consolidate the mvn steps

This has the added benefit of greatly speeding up the build by removing 
several passes.
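Step (b) above, sketched as shell. The image name and container paths are assumptions made up for this example; only the idea of bind-mounting a single read-only settings.xml (and not the whole local repo) comes from the text.

```shell
# Hypothetical docker invocation for step (b): expose only the maven
# settings file, not the whole ~/.m2 repository. Image/paths are invented.
release_build_cmd() {
  # $1: host home directory; $2: source checkout to build
  printf 'docker run --rm -v %s/.m2/settings.xml:/root/.m2/settings.xml:ro -v %s:/build -w /build hadoop-build-env mvn deploy\n' "$1" "$2"
}
release_build_cmd /home/alice /src/hadoop
```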

Probably not a small change, but I’d have to look at the code.  I’m on 
a plane tomorrow morning though.

Also:

>> 
>> Major
>> - The previously supported way of being able to use different tar-balls
>> for different sub-modules is completely broken - common and HDFS tar.gz are
>> completely empty.
>> 
> 
> Is this something people use? I figured that the sub-tarballs were a relic
> from the project split, and nowadays Hadoop is one project with one release
> tarball. I actually thought about getting rid of these extra tarballs since
> they add extra overhead to a full build.

I’m guessing no one noticed the tar errors when running mvn -Pdist.  
Not sure when they started happening.

> >   - When did we stop putting CHANGES files into the source artifacts?
> 
> CHANGES files were removed by 
> https://issues.apache.org/jira/browse/HADOOP-11792

To be a bit more specific about it, the maven assembly for source only 
includes things (more or less) that are part of the git repo.  When CHANGES.txt 
was removed from the source tree, it also went away from the tarball.  This 
isn’t too much of an issue in practice, though, given the notes are put up on 
the web, are part of the binary tarball, and can be generated by following the 
directions in BUILDING.txt.  I don’t remember if Hadoop uploads them into the 
dist area, but if not, it probably should.

> - $HADOOP_YARN_HOME/sbin/yarn-daemon.sh start historyserver doesn't even 
> work. Not just deprecated in favor of timelineserver as was advertised.

This works for me in trunk and the bash code doesn’t appear to have 
changed in a very long time.  Probably something local to your install.  (I do 
notice that the deprecation message says “starting” which is awkward when the 
stop command is given though.)  Also: is the deprecation message even true at 
this point?

>> - Cannot enable new UI in YARN because it is under a non-default
>> compilation flag. It should be on by default.
>> 
> 
> The yarn-ui profile has always been off by default, AFAIK. It's documented
> to turn it on in BUILDING.txt for release builds, and we do it in
> create-release.
> 
> IMO not a blocker. I think it's also more of a dev question (do we want to
> do this on every YARN build?) than a release one.

-1 on making yarn-ui always build.

	For what is effectively an optional component (the old UI is still 
there), its heavy dependency requirements make it a special burden outside of 
the Docker container.  If it can be changed such that it either always 
downloads the necessary bits (regardless of the OS/chipset!) and/or doesn’t 
kill the maven build if those bits can’t be found (i.e., truly optional), then 
I’d be less opposed.  (And, actually, quite pleased, because then the docker 
image build would be significantly faster.)
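A sketch of what “truly optional” could look like in shell terms: probe for the toolchain and skip rather than fail. The function name and messages are invented for illustration, not taken from the hadoop build.

```shell
# Hypothetical "truly optional" guard: only attempt the UI build when the
# required tool is on PATH; otherwise skip without failing the build.
maybe_build_ui() {
  # $1: the command the UI build depends on (e.g. node or npm)
  if command -v "$1" >/dev/null 2>&1; then
    echo "building yarn-ui"
  else
    echo "skipping yarn-ui"   # note: no non-zero exit, so the build keeps going
  fi
  return 0
}
maybe_build_ui sh    # sh always exists -> building yarn-ui
```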






[jira] [Resolved] (HADOOP-7410) Mavenize common RPM/DEB

2017-11-09 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-7410.
--
Resolution: Won't Fix

> Mavenize common RPM/DEB
> ---
>
> Key: HADOOP-7410
> URL: https://issues.apache.org/jira/browse/HADOOP-7410
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Alejandro Abdelnur
>Assignee: Eric Yang
>
> Mavenize RPM/DEB generation



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




Re: [DISCUSSION] Merging HDFS-7240 Object Store (Ozone) to trunk

2017-11-03 Thread Allen Wittenauer

> On Nov 3, 2017, at 12:08 PM, Stack  wrote:
> 
> On Sat, Oct 28, 2017 at 2:00 PM, Konstantin Shvachko 
> wrote:
> 
>> It is an interesting question whether Ozone should be a part of Hadoop.
> 
> I don't see a direct answer to this question. Is there one? Pardon me if
> I've not seen it but I'm interested in the response.

+1

Given:

* a completely different set of config files (ozone-site.xml, etc)
* package name is org.apache.hadoop.ozone, not 
org.apache.hadoop.hdfs.ozone

… it doesn’t really seem to want to be part of HDFS, much less Hadoop.

Plus hadoop-hdfs-project/hadoop-hdfs is already a battle zone when it comes to 
unit tests, dependencies, etc [*]

	At a minimum, it should at least be using its own maven module for a 
lot of the bits that generate their own maven jars so that we can split this 
functionality up at build/test time.

At a higher level, this feels a lot like the design decisions that were 
made around yarn-native-services.  This feature is either part of HDFS or it’s 
not. Pick one.  Doing both is incredibly confusing for everyone outside of the 
branch.



[jira] [Created] (HADOOP-15010) hadoop-resourceestimator's assembly buries it

2017-11-02 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-15010:
-

 Summary: hadoop-resourceestimator's assembly buries it
 Key: HADOOP-15010
 URL: https://issues.apache.org/jira/browse/HADOOP-15010
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, tools
Affects Versions: 2.9.0, 3.1.0
Reporter: Allen Wittenauer
Priority: Blocker


There's zero reason for this layout:

{code}
hadoop-3.1.0-SNAPSHOT/share/hadoop/tools/resourceestimator
 - bin
 - conf
 - data
{code}

Buried that far back, it might as well not exist.

Propose:

a) HADOOP-15009 to eliminate bin
b) Move conf file into etc/hadoop
c) keep data where it's at







[jira] [Created] (HADOOP-15009) hadoop-resourceestimator's shell scripts are a mess

2017-11-02 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-15009:
-

 Summary: hadoop-resourceestimator's shell scripts are a mess
 Key: HADOOP-15009
 URL: https://issues.apache.org/jira/browse/HADOOP-15009
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 3.1.0
Reporter: Allen Wittenauer
Priority: Critical


#1:

There's no reason for estimator.sh to exist.  Just make it a subcommand under 
yarn or whatever.  

#2:

In its current form, it's missing a BUNCH of boilerplate, which makes certain 
functionality completely fail.

#3

start/stop-estimator.sh is full of copypasta that doesn't actually do 
anything/work correctly.  Additionally, if estimator.sh goes away, these should 
too, since yarn --daemon start/stop will do everything as necessary.  








[jira] [Created] (HADOOP-14977) Xenial dockerfile needs ant

2017-10-24 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-14977:
-

 Summary: Xenial dockerfile needs ant
 Key: HADOOP-14977
 URL: https://issues.apache.org/jira/browse/HADOOP-14977
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Allen Wittenauer


findbugs doesn't work without ant installed, for whatever reason:

{code}
[warning] /usr/bin/setBugDatabaseInfo: Unable to locate ant in /usr/share/java
[warning] /usr/bin/convertXmlToText: Unable to locate ant in /usr/share/java
{code}







Re: Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2017-10-24 Thread Allen Wittenauer

> On Oct 24, 2017, at 4:10 PM, Andrew Wang  wrote:
> 
> FWIW we've been running branch-3.0 unit tests successfully internally, though 
> we have separate jobs for Common, HDFS, YARN, and MR. The failures here are 
> probably a property of running everything in the same JVM, which I've found 
> problematic in the past due to OOMs.

Last time I looked, surefire was configured to launch unit tests in 
different JVMs.  But that might only be true in trunk.  Or maybe only for some 
of the subprojects.  



Re: Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2017-10-24 Thread Allen Wittenauer

My plan is currently to:

*  switch some of Hadoop’s Yetus jobs over to my branch with the YETUS-561 
patch to test it out. 
* if the tests work, work on getting YETUS-561 committed to yetus master
* switch jobs back to ASF yetus master either post-YETUS-561 or without it if 
it doesn’t work
* go back to working on something else, regardless of the outcome


> On Oct 24, 2017, at 2:55 PM, Chris Douglas <cdoug...@apache.org> wrote:
> 
> Sean/Junping-
> 
> Ignoring the epistemology, it's a problem. Let's figure out what's
> causing memory to balloon and then we can work out the appropriate
> remedy.
> 
> Is this reproducible outside the CI environment? To Junping's point,
> would YETUS-561 provide more detailed information to aid debugging? -C
> 
> On Tue, Oct 24, 2017 at 2:50 PM, Junping Du <j...@hortonworks.com> wrote:
>> In general, the "solid evidence" of memory leak comes from analysis of 
>> heap dump, jstack, gc log, etc. In many cases, we can locate/conclude which 
>> piece of code are leaking memory from the analysis.
>> 
>> Unfortunately, I cannot find any conclusion from previous comments and it 
>> even cannot tell which daemons/components of HDFS consumes unexpected high 
>> memory. Don't sounds like a solid bug report to me.
>> 
>> 
>> 
>> Thanks,
>> 
>> 
>> Junping
>> 
>> 
>> 
>> From: Sean Busbey <bus...@cloudera.com>
>> Sent: Tuesday, October 24, 2017 2:20 PM
>> To: Junping Du
>> Cc: Allen Wittenauer; Hadoop Common; Hdfs-dev; 
>> mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
>> Subject: Re: Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86
>> 
>> Just curious, Junping what would "solid evidence" look like? Is the 
>> supposition here that the memory leak is within HDFS test code rather than 
>> library runtime code? How would such a distinction be shown?
>> 
>> On Tue, Oct 24, 2017 at 4:06 PM, Junping Du 
>> <j...@hortonworks.com<mailto:j...@hortonworks.com>> wrote:
>> Allen,
>> Do we have any solid evidence to show the HDFS unit tests going through 
>> the roof are due to serious memory leak by HDFS? Normally, I don't expect 
>> memory leak are identified in our UTs - mostly, it (test jvm gone) is just 
>> because of test or deployment issues.
>> Unless there is concrete evidence, my concern about a serious memory leak 
>> for HDFS on 2.8 is relatively low given some companies (Yahoo, Alibaba, 
>> etc.) have deployed 2.8 on large production environment for months. 
>> Non-serious memory leak (like forgetting to close stream in non-critical 
>> path, etc.) and other non-critical bugs always happens here and there that 
>> we have to live with.
>> 
>> Thanks,
>> 
>> Junping
>> 
>> 
>> From: Allen Wittenauer 
>> <a...@effectivemachines.com<mailto:a...@effectivemachines.com>>
>> Sent: Tuesday, October 24, 2017 8:27 AM
>> To: Hadoop Common
>> Cc: Hdfs-dev; 
>> mapreduce-...@hadoop.apache.org<mailto:mapreduce-...@hadoop.apache.org>; 
>> yarn-...@hadoop.apache.org<mailto:yarn-...@hadoop.apache.org>
>> Subject: Re: Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86
>> 
>>> On Oct 23, 2017, at 12:50 PM, Allen Wittenauer 
>>> <a...@effectivemachines.com<mailto:a...@effectivemachines.com>> wrote:
>>> 
>>> 
>>> 
>>> With no other information or access to go on, my current hunch is that one 
>>> of the HDFS unit tests is ballooning in memory size.  The easiest way to 
>>> kill a Linux machine is to eat all of the RAM, thanks to overcommit and 
>>> that's what this "feels" like.
>>> 
>>> Someone should verify if 2.8.2 has the same issues before a release goes 
>>> out ...
>> 
>> 
>>FWIW, I ran 2.8.2 last night and it has the same problems.
>> 
>>Also: the node didn't die!  Looking through the workspace (so the 
>> next run will destroy them), two sets of logs stand out:
>> 
>> https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/ws/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
>> 
>>and
>> 
>> https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/ws/sourcedir/hadoop-hdfs-project/hadoop-hdfs/
>> 
>>It looks like my hunch is correct:  RAM in the HDFS unit tests are 
>> going through the roof.  It's also interesting how MANY log files t

Re: Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2017-10-24 Thread Allen Wittenauer

> On Oct 23, 2017, at 12:50 PM, Allen Wittenauer <a...@effectivemachines.com> 
> wrote:
> 
> 
> 
> With no other information or access to go on, my current hunch is that one of 
> the HDFS unit tests is ballooning in memory size.  The easiest way to kill a 
> Linux machine is to eat all of the RAM, thanks to overcommit and that’s what 
> this “feels” like.
> 
> Someone should verify if 2.8.2 has the same issues before a release goes out …


FWIW, I ran 2.8.2 last night and it has the same problems.

Also: the node didn’t die!  Looking through the workspace (so the next 
run will destroy them), two sets of logs stand out:

https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/ws/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt

and

https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/ws/sourcedir/hadoop-hdfs-project/hadoop-hdfs/

It looks like my hunch is correct:  RAM in the HDFS unit tests are 
going through the roof.  It’s also interesting how MANY log files there are.  
Is surefire not picking up that jobs are dying?  Maybe not if memory is getting 
tight. 

	Anyway, at this point, branch-2.8 and higher are probably fubar’d. 
Additionally, I’ve filed YETUS-561 so that Yetus-controlled Docker containers 
can have their RAM limits set in order to prevent more nodes going catatonic.
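For context, capping a container's RAM comes down to a pair of docker run flags. A hedged sketch follows: the helper function is invented for illustration, while --memory and --memory-swap are real docker options.

```shell
# Build the docker memory-limit flags a YETUS-561-style run would pass.
mem_flags() {
  # $1: limit in gigabytes; capping swap at the same value prevents the
  # container from dodging the limit by swapping instead
  printf -- '--memory=%sg --memory-swap=%sg\n' "$1" "$1"
}
mem_flags 4   # -> --memory=4g --memory-swap=4g
```

With such a limit in place, a runaway test gets OOM-killed inside the container instead of taking the whole build node down.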






Re: Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2017-10-23 Thread Allen Wittenauer


With no other information or access to go on, my current hunch is that one of 
the HDFS unit tests is ballooning in memory size.  The easiest way to kill a 
Linux machine is to eat all of the RAM, thanks to overcommit and that’s what 
this “feels” like.

Someone should verify if 2.8.2 has the same issues before a release goes out …


> On Oct 23, 2017, at 12:38 PM, Subramaniam V K <subru...@gmail.com> wrote:
> 
> Hi Allen,
> 
> I had set up the build (or intended to) in anticipation 2.9 release. Thanks 
> for fixing the configuration!
> 
> We did face HDFS tests timeouts in branch-2 when run together but 
> individually the tests pass:
> https://issues.apache.org/jira/browse/HDFS-12620
> 
> Folks in HDFS, can you please take a look at HDFS tests in branch-2 as we are 
> not able to get even a single Yetus run to complete due to multiple test 
> failures/timeout.
> 
> Thanks,
> Subru
> 
> On Mon, Oct 23, 2017 at 11:26 AM, Vrushali C <vrushalic2...@gmail.com> wrote:
> Hi Allen,
> 
> I have filed https://issues.apache.org/jira/browse/YARN-7380 for the
> timeline service findbugs warnings.
> 
> thanks
> Vrushali
> 
> 
> On Mon, Oct 23, 2017 at 11:14 AM, Allen Wittenauer <a...@effectivemachines.com
> > wrote:
> 
> >
> > I’m really confused why this causes the Yahoo! QA boxes to go catatonic
> > (!?!) during the run.  As in, never come back online, probably in a kernel
> > panic. It’s pretty consistently in hadoop-hdfs, so something is going wrong
> > there… is branch-2 hdfs behaving badly?  Someone needs to run the
> > hadoop-hdfs unit tests to see what is going on.
> >
> > It’s probably worth noting that findbugs says there is a problem in the
> > timeline server hbase code.  Someone should probably verify + fix that
> > issue.
> >
> >
> >
> >
> >
> 





Re: Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2017-10-23 Thread Allen Wittenauer

I’m really confused why this causes the Yahoo! QA boxes to go catatonic (!?!) 
during the run.  As in, never come back online, probably in a kernel panic. 
It’s pretty consistently in hadoop-hdfs, so something is going wrong there… is 
branch-2 hdfs behaving badly?  Someone needs to run the hadoop-hdfs unit tests 
to see what is going on.

It’s probably worth noting that findbugs says there is a problem in the 
timeline server hbase code.  Someone should probably verify + fix that issue.






Re: Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2017-10-21 Thread Allen Wittenauer

To whoever set this up:

There was a job config problem where the Jenkins branch parameter wasn’t passed 
to Yetus.  Therefore both of these reports have been against trunk.  I’ve fixed 
this job (as well as the other jobs) to honor that parameter.  I’ve kicked off 
a new run with these changes.




> On Oct 21, 2017, at 9:58 AM, Apache Jenkins Server 
>  wrote:
> 
> For more details, see 
> https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/2/
> 
> [Oct 20, 2017 9:27:59 PM] (stevel) HADOOP-14942. DistCp#cleanup() should 
> check whether jobFS is null.
> [Oct 21, 2017 12:19:29 AM] (subru) YARN-6871. Add additional deSelects params 
> in
> 
> 
> 
> 
> -1 overall
> 
> 
> The following subsystems voted -1:
>asflicense unit
> 
> 
> The following subsystems voted -1 but
> were configured to be filtered/ignored:
>cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
> 
> 
> The following subsystems are considered long running:
> (runtime bigger than 1h  0m  0s)
>unit
> 
> 
> Specific tests:
> 
>Failed junit tests :
> 
>   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 
>   hadoop.hdfs.TestReadStripedFileWithMissingBlocks 
>   hadoop.hdfs.server.namenode.ha.TestPipelinesFailover 
>   hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency 
>   hadoop.yarn.server.nodemanager.scheduler.TestDistributedScheduler 
>   hadoop.yarn.server.resourcemanager.TestApplicationMasterService 
>   hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA 
>   hadoop.yarn.server.resourcemanager.TestRMHAForNodeLabels 
>   hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSLeafQueue 
>   
> hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
>  
>   hadoop.yarn.server.resourcemanager.scheduler.fifo.TestFifoScheduler 
>   hadoop.yarn.server.resourcemanager.TestApplicationMasterLauncher 
>   hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler 
>   hadoop.yarn.server.resourcemanager.TestRMHA 
>   
> hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification 
>   
> hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched 
>   hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesApps 
>   hadoop.yarn.server.resourcemanager.rmcontainer.TestRMContainerImpl 
>   
> hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation 
>   hadoop.yarn.server.resourcemanager.TestRMHATimelineCollectors 
>   hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA 
>   hadoop.yarn.server.resourcemanager.TestReservationSystemWithRMHA 
>   
> hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesSchedulerActivities
>  
>   
> hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler 
>   
> hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerLazyPreemption
>  
>   
> hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerDynamicBehavior
>  
>   hadoop.yarn.server.TestDiskFailures 
> 
>Timed out junit tests :
> 
>   
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.conf.TestZKConfigurationStore
>  
>   
> org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore 
>   org.apache.hadoop.yarn.server.resourcemanager.TestLeaderElectorService 
>   org.apache.hadoop.mapred.pipes.TestPipeApplication 
> 
> 
>   cc:
> 
>   
> https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/2/artifact/out/diff-compile-cc-root.txt
>   [4.0K]
> 
>   javac:
> 
>   
> https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/2/artifact/out/diff-compile-javac-root.txt
>   [284K]
> 
>   checkstyle:
> 
>   
> https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/2/artifact/out/diff-checkstyle-root.txt
>   [17M]
> 
>   pylint:
> 
>   
> https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/2/artifact/out/diff-patch-pylint.txt
>   [20K]
> 
>   shellcheck:
> 
>   
> https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/2/artifact/out/diff-patch-shellcheck.txt
>   [20K]
> 
>   shelldocs:
> 
>   
> https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/2/artifact/out/diff-patch-shelldocs.txt
>   [12K]
> 
>   whitespace:
> 
>   
> https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/2/artifact/out/whitespace-eol.txt
>   [8.5M]
>   
> https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/2/artifact/out/whitespace-tabs.txt
>   [292K]
> 
>   javadoc:
> 
>   
> https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/2/artifact/out/diff-javadoc-javadoc-root.txt
>   [760K]
> 
>   unit:
> 
>   
> https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
>   [308K]
>   

[jira] [Resolved] (HADOOP-14961) Docker failed to build yetus/hadoop:0de40f0: Oracle JDK 8 is NOT installed

2017-10-20 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-14961.
---
Resolution: Fixed

> Docker failed to build yetus/hadoop:0de40f0: Oracle JDK 8 is NOT installed
> --
>
> Key: HADOOP-14961
> URL: https://issues.apache.org/jira/browse/HADOOP-14961
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, test
>Affects Versions: 3.1.0
>Reporter: John Zhuge
>
> https://builds.apache.org/job/PreCommit-HADOOP-Build/13546/console 
> {noformat} 
> Downloading Oracle Java 8... 
> --2017-10-18 18:28:11-- 
> http://download.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz
>  
> Resolving download.oracle.com (download.oracle.com)... 
> 23.59.190.131, 23.59.190.130 
> Connecting to download.oracle.com (download.oracle.com)|23.59.190.131|:80... 
> connected. 
> HTTP request sent, awaiting response... 302 Moved Temporarily 
> Location: 
> https://edelivery.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz
>  [following] 
> --2017-10-18 18:28:11-- 
> https://edelivery.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz
>  
> Resolving edelivery.oracle.com (edelivery.oracle.com)... 
> 23.39.16.136, 2600:1409:a:39c::2d3e, 2600:1409:a:39e::2d3e 
> Connecting to edelivery.oracle.com 
> (edelivery.oracle.com)|23.39.16.136|:443... connected. 
> HTTP request sent, awaiting response... 302 Moved 
> Temporarily 
> Location: 
> http://download.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz?AuthParam=1508351411_3d448519d55b9741af15953ef5049a7c
>  [following] 
> --2017-10-18 18:28:11-- 
> http://download.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz?AuthParam=1508351411_3d448519d55b9741af15953ef5049a7c
>  
> Connecting to download.oracle.com (download.oracle.com)|23.59.190.131|:80... 
> connected. 
> HTTP request sent, awaiting response... 404 Not Found 
> 2017-10-18 18:28:12 ERROR 404: Not Found. 
> download failed 
> Oracle JDK 8 is NOT installed. 
> {noformat}
> Looks like Oracle JDK 8u144 is no longer available for download using that 
> link. 8u151 and 8u152 are available.
> Many of last 10 https://builds.apache.org/job/PreCommit-HADOOP-Build/ jobs 
> failed the same way, all on build host H1 and H6.
> [~aw] has a patch available in HADOOP-14816 "Update Dockerfile to use Xenial" 
> for a long term fix.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14945) create-release should work on jenkins

2017-10-12 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-14945:
-

 Summary: create-release should work on jenkins
 Key: HADOOP-14945
 URL: https://issues.apache.org/jira/browse/HADOOP-14945
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Allen Wittenauer









YARN native services Re: 2017-10-06 Hadoop 3 release status update

2017-10-09 Thread Allen Wittenauer

> On Oct 6, 2017, at 5:51 PM, Eric Yang  wrote:
> yarn application -deploy -f spec.json
> yarn application -stop 
> yarn application -restart 
> yarn application -remove 
> 
> and
> 
> yarn application -list will display both application list from RM as well as 
> docker services?

IMO, that makes much more sense. [*] I’m trying to think of a reason why 
I’d care if something was using this API or not.  It’s not like users can’t run 
whatever they want as part of their job now.  The break out is really only 
necessary so I have an idea if something is running that is using the REST API 
daemon. But more on that later….

> I think the development team was concerned that command structure overload 
> between batch applications and long running services.  In my view, there is 
> no difference, they are all applications.  The only distinction is the 
> launching and shutdown of services may be different from batch jobs.  I think 
> user can get used to these command structures without creating additional 
> command grouping.

I pretty much agree.  In fact, I’d love to see ‘yarn application’ even 
replace ‘yarn jar’. One Interface To Rule Them All.

I was under the impression (and, maybe this was my misunderstanding. if 
so, sorry) that “the goal” for this first pass was to integrate the existing 
Apache Slider functionality into YARN.  As it stands, I don’t think those goals 
have been met.  It doesn’t seem to be much different than just writing a shell 
profile to call slider directly:

---
function yarn_subcommand_service
{
   exec slider "$@"
}


(or whatever). Plus doing it this way, one gets the added benefit of the 
SIGNIFICANTLY better documentation. (Seriously: well done that team)

From an outside perspective, the extra daemon for running the REST API 
seems like when it should have clicked that the project is going off the rails 
and missing the whole “integration” aspect. Integrating the REST API into the 
RM from day one and the command separation would have also stuck out. If the RM 
runs the REST API, it now becomes a problem of “how does a user launch more 
than just a jar easily?” A problem that Hadoop has had since nearly day one.  
Redefining the “application” subcommand sounds like a reasonable way to move 
forward on that problem while also dropping the generic sounding "service" 
subcommand. 

But all that said, it feels like direct integration was avoided from 
the beginning and I’m unclear as to why. Take this line from the quick start 
documentation: 

"Start all the hadoop components HDFS, YARN as usual.”

a) This sentence is pretty much a declaration that this feature 
set isn’t part of “YARN”. 
b) Minimally, this should link to ClusterSetup. 

Anyway, yes, please work on removing all of these extra adoption 
barriers and increased workload on admin teams with Yet Another Daemon to 
monitor and collect metrics. 

Thanks!

[*] - I’m reminded of a conversation I had with a PMC member a year or three ago 
about HDFS. They proudly, almost defiantly, stated that the HDFS command 
structure is the way it is because it resembles the protocols, and that this was 
great. Guess what: users don’t care about how something is implemented, much 
less the protocols that are used to drive it. They care about consistency, EOU, 
and all those feel-good things that make applications a joy to use. They have 
more important stuff to do. Copying the protocols onto the command line only 
helps the person who wrote it and no one else. It’s hard not to walk away from 
playing with YARN in this branch feeling that it exhibits those same anti-user 
behaviors.






Re: 2017-10-06 Hadoop 3 release status update

2017-10-06 Thread Allen Wittenauer

> On Oct 6, 2017, at 1:31 PM, Andrew Wang  wrote:
> 
>   - Still waiting on Allen to review YARN native services feature.

Fake news.  

I’m still -1 on it, at least prior to a patch that posted late 
yesterday. I’ll probably have a chance to play with it early next week.


Key problems:

* still haven’t been able to bring up dns daemon due to lacking 
documentation

* it really needs better naming and command structures.  When put into 
the larger YARN context, it’s very problematic:

$ yarn --daemon start resourcemanager

vs.

$ yarn --daemon start apiserver

if you awoke from a deep sleep from inside a cave, which one 
would you expect to “start YARN”? Made worse that the feature is called 
“YARN services” all over the place.

$ yarn service foo

… what does this even mean?

It would be great if other outsiders really looked hard at this branch 
to give the team feedback.   Once it gets released, it’s gonna be too late to 
change it….

As a sidenote:

It’d be great if the folks working on YARN spent some time 
consolidating daemons.  With this branch, it now feels like we’re approaching 
the double digit area of daemons to turn on all the features.  It’s well past 
ridiculous, especially considering we still haven’t replaced the MRJHS’s 
feature set to the point we can turn it off.





[jira] [Created] (HADOOP-14908) CrossOriginFilter should trigger regex on more input

2017-09-26 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-14908:
-

 Summary: CrossOriginFilter should trigger regex on more input
 Key: HADOOP-14908
 URL: https://issues.apache.org/jira/browse/HADOOP-14908
 Project: Hadoop Common
  Issue Type: Bug
  Components: common, security
Affects Versions: 3.0.0-beta1
Reporter: Allen Wittenauer


Currently, CrossOriginFilter.java only triggers regex matching if there is an 
asterisk (*) in the config.

{code}
if (allowedOrigin.contains("*")) {
{code}

This means that entries such as:

{code}
http?://foo.example.com
https://[a-z][0-9].example.com
{code}

... and other patterns that succinctly limit the input space either need to be 
fully expanded or need their match space dramatically widened with an asterisk 
in order to pass through the filter.
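For illustration, the gate described above is easy to reproduce in a few lines of shell. `origin_mode` is an invented helper name, not anything in CrossOriginFilter, but the asterisk test mirrors the Java `contains("*")` check:

```shell
#!/usr/bin/env bash
# origin_mode is a hypothetical helper: it applies the same "contains an
# asterisk" gate that CrossOriginFilter uses before falling back to a
# plain string comparison.
origin_mode()
{
  local origin=$1
  if [[ ${origin} == *'*'* ]]; then
    echo regex
  else
    echo literal
  fi
}

# Neither of the first two patterns contains an asterisk, so the filter
# would never treat them as regexes, despite clearly being written as
# patterns.
origin_mode 'http?://foo.example.com'          # literal
origin_mode 'https://[a-z][0-9].example.com'   # literal
origin_mode 'http*://foo.example.com'          # regex
```

The last line shows the only way to get such patterns treated as regexes today: widen them with an asterisk.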






[jira] [Resolved] (HADOOP-14874) Secure NFS and DataNode related environment variable need to be added as hadoop_deprecate_envvar in hdfs_config.sh

2017-09-20 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-14874.
---
Resolution: Invalid

I'm going to close this as invalid since deprecation warnings show up when 
these commands are specifically used.

> Secure NFS and DataNode related environment variable need to be added as 
> hadoop_deprecate_envvar in hdfs_config.sh
> --
>
> Key: HADOOP-14874
> URL: https://issues.apache.org/jira/browse/HADOOP-14874
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>
> Following config variables should be added as hadoop_deprecate_envvar in 
> hdfs_config.sh
> {code}
> HADOOP_PRIVILEGED_NFS_USER
> HADOOP_SECURE_DN_PID_DIR
> HADOOP_PRIVILEGED_NFS_PID_DIR
> {code}
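For context, a deprecation shim of this shape is only a few lines of bash. This is a simplified sketch in the spirit of hadoop_deprecate_envvar, not the real hadoop-functions.sh implementation, using the ${!var} indirection discussed elsewhere in this thread; the new-variable name below is made up for the example:

```shell
#!/usr/bin/env bash
# Simplified sketch, NOT the real hadoop_deprecate_envvar: if the old
# variable is set, warn on stderr and copy its value into the new
# variable. ${!oldvar} is bash indirect expansion.
deprecate_envvar()
{
  local oldvar=$1
  local newvar=$2
  local oldval=${!oldvar}

  if [[ -n "${oldval}" ]]; then
    echo "WARNING: ${oldvar} has been replaced by ${newvar}." 1>&2
    printf -v "${newvar}" '%s' "${oldval}"
  fi
}

HADOOP_SECURE_DN_PID_DIR=/var/run/hdfs
deprecate_envvar HADOOP_SECURE_DN_PID_DIR HDFS_DATANODE_SECURE_PID_DIR
echo "${HDFS_DATANODE_SECURE_PID_DIR}"   # /var/run/hdfs
```

Because the variable names are passed as strings, one helper covers every deprecated env var instead of one block of copy-paste per variable.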






Re: Trunk fails

2017-09-19 Thread Allen Wittenauer

> On Sep 19, 2017, at 6:48 AM, Brahma Reddy Battula 
>  wrote:
> 
> Can we run "mvn install" and "compile" for all the modules after applying the 
> patch(we can skip shadeclients)

We need to get over this idea that precommit is going to find all 
problems every time.  Committers actually do need to spend some time with a 
patch.  Besides, in this particular case, it was shading that actually broke…  
which really makes me want to remove -DskipShade from the pom.xmls.  It’s 
clearly getting abused.






Re: qbt is failiing///RE: Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-09-19 Thread Allen Wittenauer

> On Sep 19, 2017, at 6:35 AM, Brahma Reddy Battula 
>  wrote:
> 
> qbt is failing from two days with following errors, any idea on this..?

Nothing to be too concerned about.

This is what it looks like when a build server gets bounced or crashes. 
 INFRA team knows our jobs take forever so they rarely wait for them to finish 
if they are doing upgrades.  They’ve been doing that work lately; you can 
follow the action on builds@.








[jira] [Created] (HADOOP-14873) CentOS 6.6 + gcc 4.4 + cmake 3.1 don't get C99

2017-09-15 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-14873:
-

 Summary: CentOS 6.6 + gcc 4.4 + cmake 3.1 don't get C99
 Key: HADOOP-14873
 URL: https://issues.apache.org/jira/browse/HADOOP-14873
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0-beta1
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer


C99 mode isn't getting invoked for containter-executor on CentOS 6.6 + gcc 4.4.






Re: [DISCUSS] Can we make our precommit test robust to dependency changes while staying usable?

2017-09-14 Thread Allen Wittenauer

> On Sep 14, 2017, at 11:01 AM, Sean Busbey  wrote:
> 
>> Committers MUST check the qbt output after a commit.  They MUST make sure
> their commit didn’t break something new.
> 
> How do we make this easier / more likely to happen?
> 
> For example, I don't see any notice on HADOOP-14654 that the qbt
> post-commit failed. Is this a timing thing? Did Steve L just notice the
> break before we could finish the 10 hours it takes to get qbt done?

qbt doesn't update JIRA because...

> 
> How solid would qbt have to be for us to do something drastic like
> auto-revert changes after a failure?


... I have never seen the unit tests for Hadoop pass completely.  So it 
would always fail every JIRA that it was testing. There's no point in enabling 
the JIRA issue update or anything like that until our unit tests actually get 
reliable.  But that also means we're reliant upon the community to self police. 
 That is also failing.

Prior to Yetus getting involved, the only unit tests that would 
reliably pass were mapreduce's.  The rest would almost always fail. It had been 
that way for years.

Someone should probably invest some time into integrating the HBase 
flaky test code a) into Yetus and then b) into Hadoop.

It's also worth pointing out that the YARN findbugs error has been 
there for about six months.  It would also fail the build.





Re: [DISCUSS] Can we make our precommit test robust to dependency changes while staying usable?

2017-09-14 Thread Allen Wittenauer

> On Sep 14, 2017, at 8:03 AM, Sean Busbey  wrote:
> 
> * HADOOP-14654 updated commons-httplient to a new patch release in
> hadoop-project
> * Precommit checked the modules that changed (i.e. not many)
> * nightly had Azure support break due to a change in behavior.

OK, so it worked as coded/designed.

> Is this just the cost of our approach to precommit vs post commit testing?

Yes.  It’s a classic speed vs. size computing problem.

test-patch: quick but only runs a subset of tests
qbt: comprehensive but takes a very long time

Committers MUST check the qbt output after a commit.  They MUST make 
sure their commit didn’t break something new.

> One approach: do a dependency:list of each module and for those that show a
> change with the patch we run tests there.

As soon as you change something like junit, you’re running over 
everything … 

Plus, let’s get real: there is a large contingent of committers that 
barely take the time to read or even comprehend the current Yetus output.  
Adding *more* output is the last thing we want to do.

> This will cause a slew of tests to run when dependencies change. For the
> change in HADOOP-14654 probably we'd just have to run at the top level.

… e.g., exactly what qbt does for 10+ hours every night.

It’s important to also recognize that we need to be “good citizens” in 
the ASF. If we can do dependency checking in one 10 hour streak vs. several, 
that reduces the load on the ASF build infrastructure.






Re: [VOTE] Merge yarn-native-services branch into trunk

2017-09-13 Thread Allen Wittenauer

> On Sep 8, 2017, at 9:25 AM, Jian He  wrote:
> 
> Hi Allen,
> The documentations are committed. Please check QuickStart.md and others in 
> the same folder.
> YarnCommands.md doc is updated to include new commands.
> DNS default port is also documented. 
> Would you like to give a look and see if it address your concerns ?

Somewhat. Greatly improved, but there’s still way too much “we’re 
working on this” and “here’s a link to a JIRA” and just general brokenness 
going on.

Here’s some examples from concepts.  Concepts!  The document I’d expect 
to give me very basic “when we talk about X, we mean Y” definitions:

"A host of scheduling features are being developed to support long running 
services.”

Yeah, ok?  How is this a concept?

  or

"[YARN-3998](https://issues.apache.org/jira/browse/YARN-3998) 
implements a retry-policy to let NM re-launch a service container when it 
fails.”


The patch itself went through nine revisions and a long discussion. 
Would an end user care about the details in that JIRA?  

If the answer to the last question is YES, then the documentation has 
failed.  The whole point of documentation is so they don’t have to go digging 
into the details of the implementation, the decision process that got us there, 
etc.  If they care enough about the details, they’ll run through the changelog 
and click on the JIRA link there.  If the summary line of the changelog isn’t 
obvious, well… then we need better summaries.

etc, etc.

...

The sleep example is nice.  Now, let’s see a non-toy example:  multiple 
instances of Apache httpd or MariaDB or something real and not from the Hadoop 
echo chamber (e.g., non-JVM-based).  If this is for “native” services, this 
shouldn’t be a problem, right?  Give a real example and users will buy what 
you’re selling.  I also think writing the docs and providing an example of 
doing something big and outside the team’s comfort zone will clarify where end 
users are going to need more help than what’s being provided.  Getting a 
MariaDB instance or three up will help tremendously here.

Which reminds me: something the documentation doesn’t cover is storage. 
What happens to it, where does it come from, etc, etc.  That’s an important 
detail that I didn’t see covered.  (I may have missed it.)  

…

Why are there directions to enable other, partially unrelated services 
in here?  Shouldn’t there be pointers to their specific documentation?  Is the 
expectation that if the requirements for those other services change that 
contributors will need to update multiple documents?

"Start the DNS server”

Just… yikes.

a) yarn classname … This is not how we do user-facing things. 
The fact it’s not really possible for a *daemon* to be put in the 
YarnCommands.md doc should be a giant red flag that something isn’t going 
correctly here.
b) no jsvc support for something that it’s strongly hinted at 
wanting to run privileged = an instant -1 for failing basic security practices. 
 There’s zero reason for it to be running continually as root.
c) If this would have been hooked into the shell scripts 
appropriately, logs, user switching, etc would have been had for free.
d) Where’s stop?  Right. Since it’s outside the scripts, there 
is no pid support so one has to do all of that manually….
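To make (d) concrete, here is roughly what hooking into the shell framework buys a daemon for free. This is a toy sketch with invented names, in the spirit of the framework's start/stop helpers, not Hadoop's actual code:

```shell
#!/usr/bin/env bash
# Toy sketch of pid-file handling: start_daemon records the background
# process id, stop_daemon consumes it. Names and details are invented
# for the example.
start_daemon()
{
  local pidfile=$1
  shift
  "$@" &
  echo $! > "${pidfile}"
}

stop_daemon()
{
  local pidfile=$1
  if [[ -f "${pidfile}" ]]; then
    kill "$(cat "${pidfile}")" 2>/dev/null
    rm -f "${pidfile}"
  fi
}

start_daemon /tmp/demo-daemon.pid sleep 600
stop_daemon /tmp/demo-daemon.pid
```

A daemon launched outside this plumbing leaves every operator reimplementing kill-by-ps-and-grep by hand.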


Given:

 "3. Supports reverse lookups (name based on IP). Note, this works only 
for Docker containers.”

then:

"It should not be used as a fully-functional corporate DNS.”

Scratch corporate.  It’s not a fully functional DNS server if it can’t do 
reverse lookups.  (Which, ironically, means it’s not suitable for use with 
Apache Hadoop, given it requires both fwd and rev DNS ...)






Patch testing on Windows [BETA]

2017-09-12 Thread Allen Wittenauer
I’m a little hesitant to share this because it’s really Not Quite Ready 
for primetime, but I figured others might want to play with it early anyway.  


https://builds.apache.org/view/H-L/view/Hadoop/job/Precommit-hadoop-win/

Will let you test patches on Windows.  It does have some big caveats though:

* It will NOT update the JIRA.  You’ll need to go back and check on the 
results later.
* It pre-applies two other patches to the source tree: 
* HADOOP-14667 modifies how Visual Studio is used.  This patch 
still needs some cleanup in order to work in a compatible way with precommit.
* HADOOP-14696 changes how the parallel directories are created 
during unit tests.  Steve had some problems with it that I haven’t been able to 
replicate.  I’ll likely just un-optimize the changes at some point.
* It currently only runs on the windows-2012-2 node.  We just need 
INFRA-15010 to be completed on the other nodes.
* It’s running a slightly modified version of hadoop’s Apache Yetus 
personality (see YETUS-545).
* It’s using a shared maven cache, so there is a risk of classes 
missing/corruption like we used to have on the Linux test boxes two years ago.  
This is an easy fix and I just haven’t gotten around to it.
* A good number of the unit tests on Windows are really broken.  Badly. 
 Let this be a catalyst to fix them.







Fwd: Moving Java Forward Faster

2017-09-07 Thread Allen Wittenauer


> Begin forwarded message:
> 
> From: "Rory O'Donnell" 
> Subject: Moving Java Forward Faster
> Date: September 7, 2017 at 2:12:45 AM PDT
> To: "strub...@yahoo.de >> Mark Struberg" 
> Cc: rory.odonn...@oracle.com, abdul.kolarku...@oracle.com, 
> balchandra.vai...@oracle.com, dalibor.to...@oracle.com, bui...@apache.org
> Reply-To: bui...@apache.org
> 
> Hi Mark & Gavin,
> 
> Oracle is proposing a rapid release model for Java SE going-forward.
> 
> The high points are highlighted below, details of the changes can be found on 
> Mark Reinhold’s blog [1] , OpenJDK discussion email list [2].
> 
> Under the proposed release model, after JDK 9, we will adopt a strict, 
> time-based model with a new major release every six months, update releases 
> every quarter, and a long-term support release every three years.
> 
> The new JDK Project will run a bit differently than the past "JDK $N" 
> Projects:
> 
> - The main development line will always be open but fixes, enhancements, and 
> features will be merged only when they're nearly finished. The main line will 
> be Feature Complete [3] at all times.
> 
> - We'll continue to use the JEP Process [4] for new features and other 
> significant changes. The bar to target a JEP to a specific release will, 
> however, be higher since the work must be Feature Complete in order to go in. 
> Owners of large or risky features will be strongly encouraged to split such 
> features up into smaller and safer parts, to integrate earlier in the release 
> cycle, and to publish separate lines of early-access builds prior to 
> integration.
> 
> The JDK Updates Project will run in much the same way as the past "JDK $N" 
> Updates Projects, though update releases will be strictly limited to fixes of 
> security issues, regressions, and bugs in newer features.
> 
> Related to this proposal, we intend to make a few changes in what we do:
> 
> - Starting with JDK 9 we'll ship OpenJDK builds under the GPL [5], to make it 
> easier for developers to deploy Java applications to cloud environments. 
> We'll initially publish OpenJDK builds for Linux/x64, followed later by 
> builds for macOS/x64 and Windows/x64.
> 
> - We'll continue to ship proprietary "Oracle JDK" builds, which include 
> "commercial features" [6] such as Java Flight Recorder and Mission Control 
> [7], under a click-through binary-code license [8]. Oracle will continue to 
> offer paid support for these builds.
> 
> - After JDK 9 we'll open-source the commercial features in order to make the 
> OpenJDK builds more attractive to developers and to reduce the differences 
> between those builds and the Oracle JDK. This will take some time, but the 
> ultimate goal is to make OpenJDK and Oracle JDK builds completely 
> interchangeable.
> 
> - Finally, for the long term we'll work with other OpenJDK contributors to 
> establish an open build-and-test infrastructure. This will make it easier to 
> publish early-access builds for features in development, and eventually make 
> it possible for the OpenJDK Community itself to publish authoritative builds 
> of the JDK.
> 
> Questions , comments, feedback to OpenJDK discuss mailing list [2]
> 
> Rgds,Rory
> 
> [1]https://mreinhold.org/blog/forward-faster
> [2]http://mail.openjdk.java.net/pipermail/discuss/2017-September/004281.html
> [3]http://openjdk.java.net/projects/jdk8/milestones#Feature_Complete
> [4]http://openjdk.java.net/jeps/0
> [5]http://openjdk.java.net/legal/gplv2+ce.html
> [6]http://www.oracle.com/technetwork/java/javase/terms/products/index.html
> [7]http://www.oracle.com/technetwork/java/javaseproducts/mission-control/index.html
> [8]http://www.oracle.com/technetwork/java/javase/terms/license/index.html
> 





Re: [VOTE] Merge yarn-native-services branch into trunk

2017-09-06 Thread Allen Wittenauer

> On Sep 5, 2017, at 6:23 PM, Jian He  wrote:
> 
>>  If it doesn’t have all the bells and whistles, then it shouldn’t be on 
>> port 53 by default.
> Sure, I’ll change the default port to not use 53 and document it.
>>  *how* is it getting launched on a privileged port? It sounds like the 
>> expectation is to run “command” as root.   *ALL* of the previous daemons in 
>> Hadoop that needed a privileged port used jsvc.  Why isn’t this one? These 
>> questions matter from a security standpoint.  
> Yes, it is running as “root” to be able to use the privileged port. The DNS 
> server is not yet integrated with the hadoop script. 
> 
>> Check the output.  It’s pretty obviously borked:
> Thanks for pointing out. Missed this when rebasing onto trunk.


Please correct me if I’m wrong, but the current summary of the branch, 
post these changes, looks like:

* A bunch of mostly new Java code that may or may not have 
javadocs (post-revert YARN-6877, still working out HADOOP-14835)
* ~1/3 of the docs are roadmap/TBD
* ~1/3 of the docs are for an optional DNS daemon that has no 
end user hook to start it
* ~1/3 of the docs are for a REST API that comes from some 
undefined daemon (apiserver?)
* Two new, but undocumented, subcommands to yarn
* There are no docs for admins or users on how to actually 
start or use this completely new/separate/optional feature

How are outside people (e.g., non-branch committers) supposed to test 
this new feature under these conditions?



Re: why the doxia-snapshot dependency

2017-09-06 Thread Allen Wittenauer

> On Sep 6, 2017, at 9:53 AM, Steve Loughran  wrote:
> 
> Well, it turns out not to like depth-4 MD tags, of the form  : DOXIA-533 
> , though that looks like a long-standing issue, not a regression

Yup. 

> workaround: don't use level4 titles. And do check locally before bothering to 
> upload the patch

That’s actually one of the side-benefits.  Most contributors never 
check their javadoc or site generation, leaving that up to Yetus. It’s 
obviously less than ideal but whatcha gonna do? Anyway, if site goes into an 
infinite loop, it basically eats up resources on the QA boxes until maven GCs.  
(Jenkins has trouble killing docker containers.  We probably need to write some 
trap code in Yetus.)



Re: why the doxia-snapshot dependency

2017-09-06 Thread Allen Wittenauer

> On Sep 6, 2017, at 7:20 AM, Steve Loughran  wrote:
> 
> 
> Every morning my laptop downloads the doxia 1.8 snapshot for its build
> 
> 

….

> This implies that the build isn't reproducible, which isn't that bad for a 
> short-lived dev branch, but not what we want for any releases


This version of doxia includes an upgraded version of the markdown 
processor.  Combined with the upgraded maven-site-plugin, fixes two very 
important bugs:

* MUCH better handling of URLs.  Older versions would exit with failure 
if they hit hdfs:// as a URL, despite it being perfectly legal. [I’ve been 
"hand-fixing" release notes and the like to avoid hitting this one.]
* Parser doesn’t have the infinite loop bug when it hit certain 
combinations of broken markdown, usually tables.

I agree that it’s… less than ideal. 

When I wrote the original version of that patch months ago, I was 
hoping it was a stop-gap.  Worst case, we publish our own version of the 
plugin.  But given that users will be empowered to fetch their own notes at 
build time, I felt it was important that this be more bullet proof…



Re: [VOTE] Merge yarn-native-services branch into trunk

2017-09-05 Thread Allen Wittenauer

> On Sep 5, 2017, at 3:12 PM, Gour Saha  wrote:
> 
> 2) Lots of markdown problems in the NativeServicesDiscovery.md document.
> This includes things like 'yarnsite.xml' (missing a dash.)
> 
> The md patch uploaded to YARN-5244 had some special chars. I fixed those
> in YARN-7161.


It’s a lot more than just special chars I think.  Even github (which 
has a way better markdown processor than what we’re using for the site docs) is 
having trouble rendering it:

https://github.com/apache/hadoop/blob/51c39c4261236ab714fe0ec8d00753dc4c6406ee/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/native-services/NativeServicesDiscovery.md

e.g., all of those ‘###’ are likely missing a space.



Re: [VOTE] Merge yarn-native-services branch into trunk

2017-09-05 Thread Allen Wittenauer

> On Sep 5, 2017, at 2:53 PM, Jian He  wrote:
> 
>> Based on the documentation, this doesn’t appear to be a fully function DNS 
>> server as an admin would expect (e.g., BIND, Knot, whatever).  Where’s 
>> forwarding? How do I setup notify? Are secondaries even supported? etc, etc.
> 
> It seems like this is a rehash of some of the discussion you and others had 
> on the JIRA. The DNS here is a thin layer backed by service registry. My 
> understanding from the JIRA is that there are no claims that this is already 
> a DNS with all the bells and whistles - its goal is mainly to expose dynamic 
> services running on YARN as end-points. Clearly, this is an optional daemon, 
> if the provided feature set is deemed insufficient, an alternative solution 
> can be plugged in by specific admins because the DNS piece is completely 
> decoupled from the rest of native-services. 

If it doesn’t have all the bells and whistles, then it shouldn’t be on 
port 53 by default. It should also be documented that one *can’t* do these 
things.  If the standard config is likely to be a “real” server on port 53 
either acting as a secondary to the YARN one or at least able to forward 
queries to it, then these need to get documented.  As it stands, operations 
folks are going to be taken completely by surprise by some relatively random 
process sitting on a very well established port.

>> In fact:  was this even tested on port 53? How does this get launched such 
>> that it even has access to open port 53?  I don’t see any calls to use the 
>> secure daemon code in the shell scripts. Is there any jsvc voodoo or is it 
>> just “run X as root”?
> 
> Yes, we have tested this DNS server on port 53 on a cluster by running the 
> DNS server as root user. The port is clearly configurable, so the admin has 
> two options. Run as root + port 53. Run as non-root + non-privileged port. We 
> tested and left it as port 53 to keep it on a standard DNS port. It is 
> already documented as such though I can see that part can be improved a 
> little.

*how* is it getting launched on a privileged port? It sounds like the 
expectation is to run “command” as root.   *ALL* of the previous daemons in 
Hadoop that needed a privileged port used jsvc.  Why isn’t this one? These 
questions matter from a security standpoint.  
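For reference, a hedged sketch of how the earlier secure daemons handled this: the secure DataNode launches via jsvc, which binds the privileged ports as root and then drops to an unprivileged user. In Hadoop 3.x that is wired up through hadoop-env.sh (the variable names below are from the secure-DataNode setup as I recall it; verify against your release):

```
# hadoop-env.sh -- illustrative secure-daemon settings, not a recommendation
export HDFS_DATANODE_SECURE_USER=hdfs   # user jsvc drops privileges to
export JSVC_HOME=/usr/bin               # location of the jsvc binary
# dfs.datanode.address / dfs.datanode.http.address are then pointed at
# ports < 1024 in hdfs-site.xml; jsvc binds them as root before the drop.
```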

>>  4) Post-merge, yarn usage information is broken.  This is especially 
>> bad since it doesn’t appear that YarnCommands was ever updated to include 
>> the new sub-commands.
> 
> The “yarn” usage command is working for me. what do you mean ? 

Check the output.  It’s pretty obviously borked:

===snip===

Daemon Commands:

nodemanager  run a nodemanager on each worker
proxyserver  run the web app proxy server
resourcemanager  run the ResourceManager
router   run the Router daemon
timelineserver   run the timeline server

Run a service Commands:

service  run a service

Run yarn-native-service rest server Commands:

apiserverrun yarn-native-service rest server


===snip===

> Yeah, looks like some previous features also forgot to update YarnCommands.md 
> for the new sub commands 

Likely.  But I was actually interested in playing with this one to 
compare it to the competition.  [Lucky you. ;) ]  But with pretty much zero 
documentation….






Re: [VOTE] Merge yarn-native-services branch into trunk

2017-09-05 Thread Allen Wittenauer

> On Aug 31, 2017, at 8:33 PM, Jian He  wrote:
> I would like to call a vote for merging yarn-native-services to trunk.

1) Did I miss it or is there no actual end-user documentation on how to 
use this?  I see 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/native-services/NativeServicesIntro.md,
 but that’s not particularly useful.  It looks like there are daemons that need 
to get started, based on other documentation?  How?  What do I configure? Is 
there a command to use to say “go do native for this job”?  I honestly have no 
idea how to make this do anything because most of the docs appear to be either 
TBD or expect me to read through a ton of JIRAs.  

2) Lots of markdown problems in the NativeServicesDiscovery.md 
document.  This includes things like ‘yarnsite.xml’ (missing a dash.)  Also, 
I’m also confused why it’s called that when the title is YARN DNS, but whatever.

3) The default port for the DNS server should NOT be 53 if typical 
deployments need to specify an alternate port.  Based on the documentation, 
this doesn’t appear to be a fully functional DNS server as an admin would expect 
(e.g., BIND, Knot, whatever).  Where’s forwarding? How do I setup notify? Are 
secondaries even supported? etc, etc. In fact:  was this even tested on port 
53? How does this get launched such that it even has access to open port 53?  I 
don’t see any calls to use the secure daemon code in the shell scripts. Is 
there any jsvc voodoo or is it just “run X as root”?

4) Post-merge, yarn usage information is broken.  This is especially 
bad since it doesn’t appear that YarnCommands was ever updated to include the 
new sub-commands.

At this point in time:

-1 on 3.0.0-beta1
-0 on trunk






[jira] [Created] (HADOOP-14836) multiple versions of maven-clean-plugin in use

2017-09-05 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-14836:
-

 Summary: multiple versions of maven-clean-plugin in use
 Key: HADOOP-14836
 URL: https://issues.apache.org/jira/browse/HADOOP-14836
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0-beta1
Reporter: Allen Wittenauer


hadoop-yarn-ui re-declares maven-clean-plugin with 3.0 while the rest of the 
source tree uses 2.5.  This should get synced up.
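One way to surface this kind of version drift is to grep the tree for plugin declarations. A self-contained sketch (the throwaway mini-tree stands in for the real source; the module name mirrors the one above):

```shell
# Build a disposable tree with two POMs pinning different clean-plugin
# versions, then grep out the declared <version> lines to expose the mismatch.
tmp=$(mktemp -d)
printf '<artifactId>maven-clean-plugin</artifactId>\n<version>2.5</version>\n' > "$tmp/pom.xml"
mkdir "$tmp/hadoop-yarn-ui"
printf '<artifactId>maven-clean-plugin</artifactId>\n<version>3.0</version>\n' > "$tmp/hadoop-yarn-ui/pom.xml"
grep -r --include=pom.xml -A 1 'maven-clean-plugin' "$tmp" | grep '<version>' | sort
rm -rf "$tmp"
```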



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Created] (HADOOP-14835) mvn site build throws SAX errors

2017-09-05 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-14835:
-

 Summary: mvn site build throws SAX errors
 Key: HADOOP-14835
 URL: https://issues.apache.org/jira/browse/HADOOP-14835
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, site
Affects Versions: 3.0.0-beta1
Reporter: Allen Wittenauer
Priority: Critical




Running mvn  install site site:stage -DskipTests -Pdist,src -Preleasedocs,docs 
results in a stack trace when run on a fresh .m2 directory.  It appears to be 
coming from the jdiff doclets in the annotations code.








[jira] [Created] (HADOOP-14830) Write some non-Docker-based instructions on how to build on Mac OS X

2017-09-01 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-14830:
-

 Summary: Write some non-Docker-based instructions on how to build 
on Mac OS X
 Key: HADOOP-14830
 URL: https://issues.apache.org/jira/browse/HADOOP-14830
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, documentation
Affects Versions: 3.0.0-beta1
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer


We should have some decent documentation on how to build on OS X, now that 
almost everything works again.






YARN javadoc failures Re: [DISCUSS] Branches and versions for Hadoop 3

2017-09-01 Thread Allen Wittenauer

> On Aug 28, 2017, at 9:58 AM, Allen Wittenauer <a...@effectivemachines.com> 
> wrote:
>   The automation only goes so far.  At least while investigating Yetus 
> bugs, I've seen more than enough blatant and purposeful ignored errors and 
> warnings that I'm not convinced it will be effective. ("That javadoc compile 
> failure didn't come from my patch!"  Um, yes, yes it did.) PR for features 
> has greatly trumped code correctness for a few years now.


I'm psychic.

Looks like YARN-6877 is crashing JDK8 javadoc.  Maven stops processing 
and errors out before even giving a build error/success. Reverting the patch 
makes things work again. Anyway, Yetus caught it, warned about it continuously, 
but it was still committed.  





[jira] [Resolved] (HADOOP-14817) shelldocs fails mvn site

2017-08-31 Thread Allen Wittenauer (JIRA)

 [ https://issues.apache.org/jira/browse/HADOOP-14817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer resolved HADOOP-14817.
---
Resolution: Fixed

> shelldocs fails mvn site
> 
>
> Key: HADOOP-14817
> URL: https://issues.apache.org/jira/browse/HADOOP-14817
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, documentation
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>    Assignee: Allen Wittenauer
>Priority: Blocker
>
> When exec-maven-plugin calls Apache Yetus 0.5.0 shelldocs, it fails:
> {code}
> [INFO] 
> [INFO] --- exec-maven-plugin:1.3.1:exec (shelldocs) @ hadoop-common ---
> /usr/bin/env: python -B: No such file or directory
> [INFO] 
> 
> [INFO] BUILD FAILURE
> {code}
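The error above is the classic multi-argument shebang problem: Linux passes everything after the interpreter path to it as a single argument, so env goes looking for a program literally named "python -B" (BSD-derived kernels split the words, which is why such a shebang can work on a developer's Mac and then break on Linux). A minimal demonstration, assuming GNU coreutils env:

```shell
# Simulate what the Linux kernel hands to env for "#!/usr/bin/env python -B":
# the trailing "python -B" arrives as ONE argument, not two.
/usr/bin/env "python -B" -c 'print("hi")' 2>&1 || status=$?
echo "env exit status: ${status}"   # 127: no program named "python -B" exists
```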






Re: native folder not found in hadoop-common build on Ubuntu

2017-08-31 Thread Allen Wittenauer

Just to close the loop on this a bit ...

Windows always triggers the 'native-win' profile because winutils is 
currently required to actually use Apache Hadoop on that platform.  On other 
platforms, the 'native' profile is optional since there is enough support in 
the JDK to at least do all the basic tasks.  (It's still HIGHLY recommended, 
however, that the native code get built.)

It'd probably be a good project for someone to see if modern JDKs (with 
or without additional dependencies) now have enough support to make winutils 
optional.  
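To summarize the build knobs discussed in the thread below (commands as described in BUILDING.txt; the prerequisite list here is abbreviated):

```
# Default build on Linux/macOS: Java-only, native code skipped.
mvn clean install -DskipTests

# Opt in to the native libraries (needs cmake, protobuf, zlib headers, ...):
mvn clean install -DskipTests -Pnative
```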



> On Aug 31, 2017, at 4:14 PM, Ping Liu  wrote:
> 
> Hi Ravi, John,
> 
> Thanks!  Yeah, it's the first profile.  Now as I tried the build with
> -Pnative, I saw the build failure.  It complains for cmake.
> 
> It's also a requirement specified in BUILDING.txt that John pointed out.
> 
> Thanks!!
> 
> Ping
> 
> 
> 
> On Thu, Aug 31, 2017 at 4:03 PM, Ravi Prakash  wrote:
> 
>> Please use -Pnative profile
>> 
>> On Thu, Aug 31, 2017 at 3:53 PM, Ping Liu  wrote:
>> 
>>> Hi John,
>>> 
>>> Thank you for your quick response.
>>> 
>>> I used
>>> 
>>> mvn clean install -DskipTests
>>> 
>>> I just did a comparison with my Windows build result. winutils is missing
>>> too.
>>> 
>>> So both "native" and "winutils" folders are not generated in target
>>> folder,
>>> although it shows BUILD SUCCESS.
>>> 
>>> Thanks.
>>> 
>>> Ping
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> On Thu, Aug 31, 2017 at 3:36 PM, John Zhuge  wrote:
>>> 
 Hi Ping,
 
 Thanks for using Hadoop. Linux is Unix-like. Hadoop supports native code
 on Linux. Please read BUILDING.txt in the root of the Hadoop source
>>> tree.
 
 Could you provide the entire Maven command line when you built Hadoop?
 
 On Thu, Aug 31, 2017 at 3:06 PM, Ping Liu 
>>> wrote:
 
> I built hadoop-common on Ubuntu in my VirtualBox.  But in target
>>> folder, I
> didn't find "native" folder that is supposed to contain the generated
>>> JNI
> header files for C.  On my Windows, native folder is found in target.
> 
> As I check the POM file, I found "native build only supported on Mac or
> Unix".  Does this mean native is not supported on Linux?
> 
> Thanks!
> 
> Ping
> 
 
 
 
 --
 John
 
>>> 
>> 
>> 





[jira] [Created] (HADOOP-14817) shelldocs fails mvn site

2017-08-29 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-14817:
-

 Summary: shelldocs fails mvn site
 Key: HADOOP-14817
 URL: https://issues.apache.org/jira/browse/HADOOP-14817
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, documentation
Affects Versions: 3.0.0-beta1
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker


When exec-maven-plugin calls Apache Yetus 0.5.0 shelldocs, it fails:

{code}
[INFO] 
[INFO] --- exec-maven-plugin:1.3.1:exec (shelldocs) @ hadoop-common ---
/usr/bin/env: python -B: No such file or directory
[INFO] 
[INFO] BUILD FAILURE
{code}





