Re: Proposal: Force 'squash and merge' on github UI

2019-03-04 Thread Akira Ajisaka
Thanks Elek for your proposal.

I'm +1 for disabling 'merge' and 'rebase and merge' buttons in the
GitHub repository.

-Akira

On Tue, Mar 5, 2019 at 12:42 AM Elek, Marton  wrote:
>
> I don't know which is the best approach; personally I prefer to
> merge locally, as in that case the commit can be signed by my local key.
>
> A GitHub PR can be closed by adding a "Closes #412" line to the end
> of the commit message, and with this line the final commit will be
> linked to the original PR.
>
>
> Using the merge button can also work well if we use the 'squash and merge' option.
>
> With the simple 'merge' option we would have more complex history
> (additional merge commits) and with 'rebase' we would have multiple
> small commits for one Jira.
>
> I think the 'squash and merge' option is in line with our existing
> practice, and I propose disabling the two other options to make it
> easier to choose the right one for the "press-to-merge" approach.
>
> What do you think?
> Marton
>
>
> On 3/4/19 12:44 PM, Steve Loughran wrote:
> > thanks
> >
> > I'm just starting to play with/understand the integration, and think we
> > should start worrying about "what makes a good process here"
> >
> > while I like the idea of a "press-to-merge" button, it's not going to do
> > the whitespace stripping on a merge we ought to be doing and it gets signed
> > by the github GPG key, rather than any private key which some but not
> > enough of us use.
> >
> > Similarly: where do discussions go, how best to review, etc, etc.
> >
> > I've got no idea of best practices here. Some experience of the Spark
> > process, which has
> >
> > * A template for the PR text which is automatically used to initialize the
> > text
> > * strict use of reviewers demanding everything right (no
> > committer-final-cleanup)
> > * the ability of trusted people to ask jenkins to run tests etc
> >
> > 1. Any other ASF projects to look at?
> > 2. who fancies trying to define a process here on a confluence page?
> >
> >
> >
> >
> > On Mon, Mar 4, 2019 at 8:05 AM Akira Ajisaka  wrote:
> >
> >> This issue was fixed by ASF infra team.
> >> If there are any problems, please let me know.
> >>
> >> Regards,
> >> Akira
> >>
> >> On Mon, Mar 4, 2019 at 3:25 PM Akira Ajisaka  wrote:
> >>>
> >>> Hi folks,
> >>>
> >>> I found github and gitbox are inconsistent and filed
> >>> https://issues.apache.org/jira/browse/INFRA-17947 to fix it.
> >>>
> >>> Regards,
> >>> Akira
> >>
> >> -
> >> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> >> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> >>
> >>
> >
>
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>




[jira] [Resolved] (HADOOP-16162) Remove unused Job Summary Appender configurations from log4j.properties

2019-03-04 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-16162.

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0

Committed to trunk. Thanks [~coder_chenzhi]!

> Remove unused Job Summary Appender configurations from log4j.properties
> ---
>
> Key: HADOOP-16162
> URL: https://issues.apache.org/jira/browse/HADOOP-16162
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 2.0.3-alpha
>Reporter: Chen Zhi
>Priority: Major
>  Labels: CI, pull-request-available
> Fix For: 3.3.0
>
> Attachments: HADOOP-16162.1.patch
>
>
> The Job Summary Appender (JSA) was introduced in 
> [MAPREDUCE-740|https://issues.apache.org/jira/browse/MAPREDUCE-740] to 
> provide summary information about a job's runtime. This appender is 
> referenced only by the logger defined in 
> org.apache.hadoop.mapred.JobInProgress$JobSummary. However, that class was 
> removed in 
> [MAPREDUCE-4266|https://issues.apache.org/jira/browse/MAPREDUCE-4266] 
> together with other MR1 files. The appender is no longer used, so I think 
> we can remove it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Created] (HADOOP-16167) "hadoop CLASSFILE" prints error messages on Ubuntu 18

2019-03-04 Thread Daniel Templeton (JIRA)
Daniel Templeton created HADOOP-16167:
-

 Summary: "hadoop CLASSFILE" prints error messages on Ubuntu 18
 Key: HADOOP-16167
 URL: https://issues.apache.org/jira/browse/HADOOP-16167
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.2.0
Reporter: Daniel Templeton


{noformat}
# hadoop org.apache.hadoop.conf.Configuration
/usr/lib/hadoop/bin/../lib/hadoop/libexec//hadoop-functions.sh: line 2366: 
HADOOP_ORG.APACHE.HADOOP.CONF.CONFIGURATION_USER: bad substitution
/usr/lib/hadoop/bin/../lib/hadoop/libexec//hadoop-functions.sh: line 2331: 
HADOOP_ORG.APACHE.HADOOP.CONF.CONFIGURATION_USER: bad substitution
/usr/lib/hadoop/bin/../lib/hadoop/libexec//hadoop-functions.sh: line 2426: 
HADOOP_ORG.APACHE.HADOOP.CONF.CONFIGURATION_OPTS: bad substitution
{noformat}

The issue is a regression in bash 4.4.  See 
[here|http://savannah.gnu.org/support/?109649].  The extraneous output can 
break scripts that read the command output.

According to [~aw]:

{quote}Oh, I think I see the bug.  HADOOP_SUBCMD (and equivalents in yarn, 
hdfs, etc) just needs some special handling when a custom method is being 
called.  For example, there’s no point in checking to see if it should run with 
privileges, so just skip over that.  Probably a few other places too.  
Relatively easy fix.  2 lines of code, maybe.{quote}
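For illustration only (this is a hypothetical sketch, not the committed fix): the "bad substitution" arises because the class name's dots end up inside a variable name that is then expanded indirectly, and bash 4.4 rejects that. One way around it is to sanitize the name before the indirect expansion:

```shell
#!/usr/bin/env bash
# Hypothetical workaround sketch, not the actual hadoop-functions.sh fix:
# bash 4.4 rejects '.' inside ${!name}, so replace dots before building
# the variable name that gets expanded indirectly.
subcmd="org.apache.hadoop.conf.Configuration"
safe="${subcmd//./_}"                      # dots are invalid in variable names
varname="HADOOP_$(echo "$safe" | tr '[:lower:]' '[:upper:]')_USER"
# Indirect expansion now succeeds on bash 4.4 (expands to empty if unset).
value="${!varname:-}"
echo "$varname=$value"
```

With the dots replaced, the expansion yields `HADOOP_ORG_APACHE_HADOOP_CONF_CONFIGURATION_USER` and no error is printed.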







[jira] [Resolved] (HADOOP-16165) S3A connector - are multiple SSE-KMS keys supported within same bucket?

2019-03-04 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16165.
-
Resolution: Invalid

This isn't the way to ask questions. Get on the common-dev list & make queries 
there. 

Or even better: run some experiments

> S3A connector - are multiple SSE-KMS keys supported within same bucket?
> ---
>
> Key: HADOOP-16165
> URL: https://issues.apache.org/jira/browse/HADOOP-16165
> Project: Hadoop Common
>  Issue Type: Wish
>  Components: tools
>Reporter: t oo
>Priority: Major
>
> Within a single S3 bucket I have 2 objects:
> s3a://bucketabc/a/b/c/object1
> s3a://bucketabc/a/b/c/object2
> object1 is encrypted with sse-kms (kms key1)
> object2 is encrypted with sse-kms (kms key2)
> The 2 objects are not encrypted with a common kms key! But they are in the 
> same s3 bucket.
>  
> [~ste...@apache.org] - Does the s3a connector support multiple SSE-KMS keys 
> so that it can read data (i.e., using Hive/Spark to read from S3) from 
> different objects within the same bucket when those objects were encrypted 
> with different keys?
> [https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/bk_cloud-data-access/content/SSE-KMS-enable.html]
>  
>  
>  <property>
>    <name>fs.s3a.server-side-encryption.key</name>
>    <value>arn:aws:kms:us-west-2:360379543683:key/071a86ff-8881-4ba0-9230-95af6d01ca01,
>    arn:aws:kms:us-west-2:360379543683:key/vjsnhdjksd</value>
>  </property>







Re: ${!var} In Scripts

2019-03-04 Thread Allen Wittenauer



> On Mar 4, 2019, at 10:00 AM, Daniel Templeton  wrote:
> 
> Do you want to file a JIRA for it, or shall I?

Given I haven’t done any Hadoop work in months and months …






[jira] [Created] (HADOOP-16166) TestRawLocalFileSystemContract fails with build Docker container running on Mac

2019-03-04 Thread Matt Foley (JIRA)
Matt Foley created HADOOP-16166:
---

 Summary: TestRawLocalFileSystemContract fails with build Docker 
container running on Mac
 Key: HADOOP-16166
 URL: https://issues.apache.org/jira/browse/HADOOP-16166
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 3.3.0
Reporter: Matt Foley


The Mac has a case-insensitive filesystem.  When using the recommended build 
Docker container via `start-build-env.sh`, the container mounts the Mac 
filesystem to share the local git repository for Hadoop, which is very nice 
and convenient.

This means the TestRawLocalFileSystemContract::testFilesystemIsCaseSensitive() 
test case (inherited from FileSystemContractBaseTest) should be skipped.  
Instead it runs and reports a unit test failure, because the overriding 
TestRawLocalFileSystemContract::filesystemIsCaseSensitive() does not take into 
account the possibility of a Linux OS mounting a macOS filesystem.

The fix would extend 
TestRawLocalFileSystemContract::filesystemIsCaseSensitive() to recognize this 
case.
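As a sketch of the kind of probe the extended check could perform (purely illustrative; not the actual test code, and the names here are made up): create a file and see whether a differently-cased name resolves to it.

```shell
#!/usr/bin/env bash
# Illustrative case-sensitivity probe: on a case-insensitive mount
# (e.g. a Mac filesystem shared into a Linux container), the
# upper-cased name resolves to the file we just created.
probe_dir=$(mktemp -d)
touch "$probe_dir/casetest"
if [ -e "$probe_dir/CASETEST" ]; then
  fs_case="insensitive"   # the case-sensitivity test should be skipped
else
  fs_case="sensitive"
fi
rm -rf "$probe_dir"
echo "filesystem is case-$fs_case"
```

A probe like this keys off the directory actually under test, rather than assuming the host OS implies the filesystem's behavior.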







[jira] [Created] (HADOOP-16165) S3A connector - are multiple SSE-KMS keys supported within same bucket?

2019-03-04 Thread t oo (JIRA)
t oo created HADOOP-16165:
-

 Summary: S3A connector - are multiple SSE-KMS keys supported 
within same bucket?
 Key: HADOOP-16165
 URL: https://issues.apache.org/jira/browse/HADOOP-16165
 Project: Hadoop Common
  Issue Type: Wish
  Components: tools
Reporter: t oo


Within a single S3 bucket I have 2 objects:

s3a://bucketabc/a/b/c/object1

s3a://bucketabc/a/b/c/object2

object1 is encrypted with sse-kms (key1)

object2 is encrypted with sse-kms (key2)

The 2 objects are not encrypted with a common key! But they are in the same S3 
bucket.

 

[~ste...@apache.org] - Does the s3a connector support multiple SSE-KMS keys so 
that it can read data (i.e., using Hive/Spark to read from S3) from different 
objects within the same bucket when those objects were encrypted with 
different keys?

[https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/bk_cloud-data-access/content/SSE-KMS-enable.html]

 

  <property>
    <name>fs.s3a.server-side-encryption.key</name>
    <value>arn:aws:kms:us-west-2:360379543683:key/071a86ff-8881-4ba0-9230-95af6d01ca01</value>
  </property>
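For what it's worth, S3A also supports per-bucket configuration overrides, so at least one key per bucket can be configured. A hedged sketch, assuming the standard fs.s3a.bucket.* override pattern ("bucketabc" is the bucket from the question):

```xml
<!-- Per-bucket override sketch; treat the exact property names as an
     assumption to verify against the S3A documentation. -->
<property>
  <name>fs.s3a.bucket.bucketabc.server-side-encryption-algorithm</name>
  <value>SSE-KMS</value>
</property>
<property>
  <name>fs.s3a.bucket.bucketabc.server-side-encryption.key</name>
  <value>arn:aws:kms:us-west-2:360379543683:key/071a86ff-8881-4ba0-9230-95af6d01ca01</value>
</property>
```

Note that, as far as I understand, the configured key only affects writes: for reads, S3 looks up the KMS key recorded with each object, so objects encrypted with different keys should be readable as long as the caller has decrypt permission on each key.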








[jira] [Created] (HADOOP-16164) S3aDelegationTokens to add accessor for tests to get at the token binding

2019-03-04 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16164:
---

 Summary: S3aDelegationTokens to add accessor for tests to get at 
the token binding
 Key: HADOOP-16164
 URL: https://issues.apache.org/jira/browse/HADOOP-16164
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.3.0
Reporter: Steve Loughran
Assignee: Steve Loughran


For testing, it turns out to be useful to get at the current token binding in 
the S3ADelegationTokens instance of a filesystem.

Provide an accessor, tagged as being for testing only.







Re: ${!var} In Scripts

2019-03-04 Thread Daniel Templeton

Do you want to file a JIRA for it, or shall I?

Daniel

On 3/4/19 9:55 AM, Allen Wittenauer wrote:



On Mar 4, 2019, at 9:33 AM, Daniel Templeton  wrote:

Thanks!  That's not even close to what the docs suggest it does--no idea what's 
up with that.

It does. Here’s the paragraph:

"If the first character of parameter is an exclamation point (!), a level of 
variable indirection is introduced. Bash uses the value of the variable formed from 
the rest of parameter as the name of the variable; this variable is then expanded 
and that value is used in the rest of the substitution, rather than the value of 
parameter itself. This is known as indirect expansion. The exceptions to this are 
the expansions of ${!prefix*} and ${!name[@]} described below. The exclamation point 
must immediately follow the left brace in order to introduce indirection.”

There’s a whole section on bash indirect references in the ABS as well. 
(Although I think most of the examples there still use \$$foo syntax with a 
note that it was replaced with ${!foo} syntax. lol.)

For those playing at home, the hadoop shell code uses them almost 
entirely for utility functions in order to reduce the amount of code that would 
be needed to process the ridiculous amount of duplicated env vars (e.g., 
HADOOP_HOME vs. HDFS_HOME vs. YARN_HOME vs. …).


This issue only shows up if the user uses the hadoop command to run an arbitrary class 
not in the default package, e.g. "hadoop org.apache.hadoop.conf.Configuration". 
 We've been quietly allowing that misuse forever.  Unfortunately, treating CLI output as 
an API means we can't change that behavior in a minor release.  We could, however, deprecate it 
and add a warning when it's used.  I think that would cover us sufficiently if someone 
trips on the Ubuntu 18 regression.

Thoughts?

Oh, I think I see the bug.  HADOOP_SUBCMD (and equivalents in yarn, 
hdfs, etc) just needs some special handling when a custom method is being 
called.  For example, there’s no point in checking to see if it should run with 
privileges, so just skip over that.  Probably a few other places too.  
Relatively easy fix.  2 lines of code, maybe.







Re: ${!var} In Scripts

2019-03-04 Thread Allen Wittenauer



> On Mar 4, 2019, at 9:33 AM, Daniel Templeton  wrote:
> 
> Thanks!  That's not even close to what the docs suggest it does--no idea 
> what's up with that.

It does. Here’s the paragraph:

"If the first character of parameter is an exclamation point (!), a level of 
variable indirection is introduced. Bash uses the value of the variable formed 
from the rest of parameter as the name of the variable; this variable is then 
expanded and that value is used in the rest of the substitution, rather than 
the value of parameter itself. This is known as indirect expansion. The 
exceptions to this are the expansions of ${!prefix*} and ${!name[@]} described 
below. The exclamation point must immediately follow the left brace in order to 
introduce indirection.”

There’s a whole section on bash indirect references in the ABS as well. 
(Although I think most of the examples there still use \$$foo syntax with a 
note that it was replaced with ${!foo} syntax. lol.)

For those playing at home, the hadoop shell code uses them almost 
entirely for utility functions in order to reduce the amount of code that would 
be needed to process the ridiculous amount of duplicated env vars (e.g., 
HADOOP_HOME vs. HDFS_HOME vs. YARN_HOME vs. …).
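As a sketch of that pattern (function and variable names here are invented, not the actual hadoop-functions.sh code), one helper can service all of the duplicated variables via indirection instead of repeating the same if/else chain:

```shell
#!/usr/bin/env bash
# Illustrative only: resolve whichever of several duplicated *_HOME
# variables is set, using ${!var} indirect expansion.
resolve_home() {
  local prefix var
  for prefix in "$@"; do
    var="${prefix}_HOME"
    if [ -n "${!var:-}" ]; then   # indirect expansion: value of $HADOOP_HOME etc.
      echo "${!var}"
      return 0
    fi
  done
  return 1
}

unset HADOOP_HOME HDFS_HOME YARN_HOME   # clean slate for the demo
HDFS_HOME=/opt/hadoop
home=$(resolve_home HADOOP HDFS YARN)
echo "$home"
```

One function body, any number of prefixes: that is the code-size saving the indirection buys.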

> This issue only shows up if the user uses the hadoop command to run an 
> arbitrary class not in the default package, e.g. "hadoop 
> org.apache.hadoop.conf.Configuration".  We've been quietly allowing that 
> misuse forever.  Unfortunately, treating CLI output as an API means we can't 
> change that behavior in a minor release.  We could, however, deprecate it and add a 
> warning when it's used.  I think that would cover us sufficiently if someone 
> trips on the Ubuntu 18 regression.
> 
> Thoughts?

Oh, I think I see the bug.  HADOOP_SUBCMD (and equivalents in yarn, 
hdfs, etc) just needs some special handling when a custom method is being 
called.  For example, there’s no point in checking to see if it should run with 
privileges, so just skip over that.  Probably a few other places too.  
Relatively easy fix.  2 lines of code, maybe.





Re: ${!var} In Scripts

2019-03-04 Thread Daniel Templeton
Thanks!  That's not even close to what the docs suggest it does--no idea 
what's up with that.  With your example, I was able to figure out 
exactly what the issue is.  On Ubuntu 18/bash 4.4, dot is rejected in 
the name of the variable to substitute, which is sane in principle as 
dots aren't allowed in variable names, but it's a regression from Ubuntu 
16/bash 4.3.


For example:

% docker run -ti ubuntu:16.04 /bin/bash
root@9a36ac04f2ff:/# k=l.m
root@9a36ac04f2ff:/# echo ${!k}

root@9a36ac04f2ff:/# exit
% docker run -ti ubuntu:18.04 /bin/bash
root@36ce0eb1d846:/# k=l.m
root@36ce0eb1d846:/# echo ${!k}
bash: l.m: bad substitution
root@36ce0eb1d846:/# exit

This issue only shows up if the user uses the hadoop command to run an 
arbitrary class not in the default package, e.g. "hadoop 
org.apache.hadoop.conf.Configuration".  We've been quietly allowing that 
misuse forever.  Unfortunately, treating CLI output as an API means we 
can't change that behavior in a minor release.  We could, however, deprecate it 
and add a warning when it's used.  I think that would cover us 
sufficiently if someone trips on the Ubuntu 18 regression.


Thoughts?

Daniel

On 3/1/19 3:52 PM, Allen Wittenauer wrote:



On Mar 1, 2019, at 3:04 PM, Daniel Templeton  wrote:

There are a bunch of uses of the bash syntax, "${!var}", in the Hadoop scripts. 
 Can anyone explain to me what that syntax was supposed to achieve?


#!/usr/bin/env bash

j="hi"
m="bye"
k=j
echo ${!k}
k=m
echo ${!k}






[jira] [Created] (HADOOP-16163) NPE in setup/teardown of ITestAbfsDelegationTokens

2019-03-04 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16163:
---

 Summary: NPE in setup/teardown of ITestAbfsDelegationTokens
 Key: HADOOP-16163
 URL: https://issues.apache.org/jira/browse/HADOOP-16163
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Steve Loughran










[jira] [Created] (HADOOP-16162) Remove unused Job Summary Appender configurations from log4j.properties

2019-03-04 Thread Chen Zhi (JIRA)
Chen Zhi created HADOOP-16162:
-

 Summary: Remove unused Job Summary Appender configurations from 
log4j.properties
 Key: HADOOP-16162
 URL: https://issues.apache.org/jira/browse/HADOOP-16162
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 2.0.3-alpha
Reporter: Chen Zhi
 Attachments: diff

The Job Summary Appender (JSA) was introduced in 
[MAPREDUCE-740|https://issues.apache.org/jira/browse/MAPREDUCE-740] to provide 
summary information about a job's runtime. This appender is referenced only by 
the logger defined in 
org.apache.hadoop.mapred.JobInProgress$JobSummary. However, that class was 
removed in 
[MAPREDUCE-4266|https://issues.apache.org/jira/browse/MAPREDUCE-4266] together 
with other MR1 files. The appender is no longer used, so I think we can 
remove it.
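The configuration block in question looks roughly like this (reconstructed from the standard log4j.properties conventions, so treat the exact names as approximate):

```properties
# Job Summary Appender (JSA): logs job summaries to a daily-rolled file.
# Its only consumer, JobInProgress$JobSummary, was removed in MAPREDUCE-4266.
hadoop.mapreduce.jobsummary.logger=${hadoop.root.logger}
hadoop.mapreduce.jobsummary.log.file=hadoop-mapreduce.jobsummary.log
log4j.appender.JSA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.JSA.File=${hadoop.log.dir}/${hadoop.mapreduce.jobsummary.log.file}
log4j.appender.JSA.layout=org.apache.log4j.PatternLayout
log4j.appender.JSA.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
log4j.logger.org.apache.hadoop.mapred.JobInProgress$JobSummary=${hadoop.mapreduce.jobsummary.logger}
log4j.additivity.org.apache.hadoop.mapred.JobInProgress$JobSummary=false
```

Since nothing resolves the JobInProgress$JobSummary logger anymore, deleting the whole block should be behavior-neutral.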







Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-03-04 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1065/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml 
   hadoop-build-tools/src/main/resources/checkstyle/suppressions.xml 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-tools/hadoop-azure/src/config/checkstyle.xml 
   hadoop-tools/hadoop-resourceestimator/src/config/checkstyle.xml 

Failed junit tests :

   hadoop.util.TestReadWriteDiskValidator 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits 
   hadoop.hdfs.server.datanode.TestDataNodeMetrics 
   
hadoop.yarn.server.resourcemanager.metrics.TestSystemMetricsPublisherForV2 
   
hadoop.yarn.server.resourcemanager.metrics.TestCombinedSystemMetricsPublisher 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestMRTimelineEventHandling 
   hadoop.ozone.freon.TestFreonWithDatanodeFastRestart 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1065/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1065/artifact/out/diff-compile-javac-root.txt
  [336K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1065/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1065/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1065/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1065/artifact/out/diff-patch-pylint.txt
  [144K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1065/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1065/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1065/artifact/out/whitespace-eol.txt
  [9.6M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1065/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1065/artifact/out/xml.txt
  [16K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1065/artifact/out/branch-findbugs-hadoop-ozone_common.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1065/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1065/artifact/out/branch-findbugs-hadoop-submarine_hadoop-submarine-yarnservice-runtime.txt
  [4.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1065/artifact/out/diff-javadoc-javadoc-root.txt
  [752K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1065/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [168K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1065/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [336K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1065/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [104K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1065/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [24K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1065/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1065/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
  [88K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1065/artifact/out/patch-unit-hadoop-hdds_container-service.txt
  [24K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1065/artifact/out/patch-unit-hadoop-ozone_common.txt
  [8.0K]
   

Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-03-04 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/250/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml 
   hadoop-build-tools/src/main/resources/checkstyle/suppressions.xml 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle.xml 
   hadoop-tools/hadoop-resourceestimator/src/config/checkstyle.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   Class org.apache.hadoop.fs.GlobalStorageStatistics defines non-transient 
non-serializable instance field map In GlobalStorageStatistics.java:instance 
field map In GlobalStorageStatistics.java 

FindBugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Dead store to state in 
org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Saver.save(OutputStream,
 INodeSymlink) At 
FSImageFormatPBINode.java:org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Saver.save(OutputStream,
 INodeSymlink) At FSImageFormatPBINode.java:[line 623] 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.security.TestKDiag 
   hadoop.ipc.TestRpcServerHandoff 
   hadoop.util.TestDiskChecker 
   hadoop.ipc.TestCallQueueManager 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.server.namenode.TestNameNodeMXBean 
   hadoop.tracing.TestTraceAdmin 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/250/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/250/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/250/artifact/out/diff-compile-cc-root-jdk1.8.0_191.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/250/artifact/out/diff-compile-javac-root-jdk1.8.0_191.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/250/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/250/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/250/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/250/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/250/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/250/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/250/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/250/artifact/out/whitespace-tabs.txt
  [1.2M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/250/artifact/out/xml.txt
  [20K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/250/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
  [8.0K]
   

Re: Github and GitBox are inconsistent

2019-03-04 Thread Steve Loughran
Update: you can use the GitHub web editor to remove whitespace, as done in
https://github.com/apache/hadoop/pull/539

but as that's the hadoop-aws module, it still needs a client-side pull and retest.

If you do a git pull from your own private branch which formed the PR,
those edits come back down, signed with the github GPG key & with me listed
as the --author.

* gpg: Signature made Mon  4 Mar 11:54:27 2019 GMT
| gpg: using RSA key 4AEE18F83AFDEB23
| gpg: Good signature from "GitHub (web-flow commit signing) <
nore...@github.com>" [full]
| 9ed42911c18 - (HEAD -> s3/HADOOP-16109-parquet-eof-s3a-seek,
github/s3/HADOOP-16109-parquet-eof-s3a-seek) ITestS3AContractSeek.java (4
minutes ago)
* gpg: Signature made Thu 28 Feb 19:44:03 2019 GMT
| gpg: using RSA key 38237EE425050285077DB57AD22CF846DBB162A0
| gpg: Good signature from "Steve Loughran (ASF code sign key  - 2018) <
ste...@apache.org>" [ultimate]
| gpg: aka "[jpeg image of size 8070]" [ultimate]
| cada0c26671 - HADOOP-16109. Use parameterized tests for the s3a seek
contract, with all three seek options checked (4 days ago)


This does mean that for PRs where the submitter has enabled the "allow edits"
option, it would be possible for the reviewer to fix the whitespace before
the commit, but that still sucks. Now, if we could have hadoop-yetus do the fix...

FWIW, AW recommends Yetus's smart-apply-patch (dev-support/bin/smart-apply-patch):

https://effectivemachines.com/2018/05/23/applying-patches-smartly-using-apache-yetus/

I should be able to locally D/L and apply the patch, with whitespace fix,
from:

smart-apply-patch --project=hadoop --committer --gpg-sign GH:539

If this works well, maybe we should just mandate that this is the mechanism
for merging PRs in.
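For comparison, the local alternative discussed elsewhere in the thread (squash locally, sign with your own key, close the PR via the commit message) looks roughly like the following, sketched against a throwaway repo; the JIRA id, PR number, and author identity are placeholders:

```shell
#!/usr/bin/env bash
# Sketch of the local "squash and merge" flow: all PR commits collapse
# into one commit whose message carries the "Closes #NNN" line.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -q -b trunk
export GIT_AUTHOR_EMAIL=dev@example.com GIT_AUTHOR_NAME=dev
export GIT_COMMITTER_EMAIL=dev@example.com GIT_COMMITTER_NAME=dev
git commit -q --allow-empty -m "base"

git checkout -q -b pr-branch            # stands in for the contributor's PR
echo one  > f.txt; git add f.txt; git commit -q -m "wip 1"
echo two >> f.txt; git add f.txt; git commit -q -m "wip 2"

git checkout -q trunk
git merge --squash -q pr-branch         # stage the whole PR as one change
# Add -S here to sign with a local GPG key rather than GitHub's web-flow key.
git commit -q -m "HADOOP-XXXXX. Fix the thing. Closes #412"

msg=$(git log -1 --format=%s)
count=$(git rev-list --count HEAD)
echo "$msg ($count commits on trunk)"
```

The result matches what the GitHub 'squash and merge' button produces, except the committer controls the signing key.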


On Mon, Mar 4, 2019 at 11:44 AM Steve Loughran  wrote:

> thanks
>
> I'm just starting to play with/understand the integration, and think we
> should start worrying about "what makes a good process here"
>
> while I like the idea of a "press-to-merge" button, it's not going to do
> the whitespace stripping on a merge we ought to be doing and it gets signed
> by the github GPG key, rather than any private key which some but not
> enough of us use.
>
> Similarly: where do discussions go, how best to review, etc, etc.
>
> I've got no idea of best practises here. Some experience of the spark
> process, which has
>
> * A template for the PR text which is automatically used to initialize the
> text
> * strict use of reviewers demanding everything right (no
> committer-final-cleanup)
> * the ability of trusted people to ask jenkins to run tests etc
>
> 1. Any other ASF projects to look at?
> 2. who fancies trying to define a process here on a confluence page?
>
>
>
>
> On Mon, Mar 4, 2019 at 8:05 AM Akira Ajisaka  wrote:
>
>> This issue was fixed by ASF infra team.
>> If there are any problems, please let me know.
>>
>> Regards,
>> Akira
>>
>> On Mon, Mar 4, 2019 at 3:25 PM Akira Ajisaka  wrote:
>> >
>> > Hi folks,
>> >
>> > I found github and gitbox are inconsistent and filed
>> > https://issues.apache.org/jira/browse/INFRA-17947 to fix it.
>> >
>> > Regards,
>> > Akira
>>
>> -
>> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
>> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>>
>>


Re: Github and GitBox are inconsistent

2019-03-04 Thread Steve Loughran
thanks

I'm just starting to play with/understand the integration, and think we
should start worrying about "what makes a good process here"

while I like the idea of a "press-to-merge" button, it's not going to do
the whitespace stripping on a merge we ought to be doing and it gets signed
by the github GPG key, rather than any private key which some but not
enough of us use.

Similarly: where do discussions go, how best to review, etc, etc.

I've got no idea of best practices here. Some experience of the spark
process, which has

* A template for the PR text which is automatically used to initialize the
text
* strict use of reviewers demanding everything right (no
committer-final-cleanup)
* the ability of trusted people to ask jenkins to run tests etc

1. Any other ASF projects to look at?
2. Who fancies trying to define a process here on a confluence page?




On Mon, Mar 4, 2019 at 8:05 AM Akira Ajisaka  wrote:

> This issue was fixed by ASF infra team.
> If there are any problems, please let me know.
>
> Regards,
> Akira
>
> On Mon, Mar 4, 2019 at 3:25 PM Akira Ajisaka  wrote:
> >
> > Hi folks,
> >
> > I found github and gitbox are inconsistent and filed
> > https://issues.apache.org/jira/browse/INFRA-17947 to fix it.
> >
> > Regards,
> > Akira
>


[jira] [Resolved] (HADOOP-16160) TestAdlFileSystemContractLive fails in branch-2.8

2019-03-04 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-16160.

Resolution: Duplicate

Cherry-picked HADOOP-14170 to branch-2.8. Closing.

> TestAdlFileSystemContractLive fails in branch-2.8
> -
>
> Key: HADOOP-16160
> URL: https://issues.apache.org/jira/browse/HADOOP-16160
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
>Priority: Major
>
> {noformat}
> Running org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive
> Tests run: 30, Failures: 0, Errors: 30, Skipped: 0, Time elapsed: 0.313 sec 
> <<< FAILURE! - in org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive
> testWorkingDirectory(org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive)
>   Time elapsed: 0.134 sec  <<< ERROR!
> java.lang.NullPointerException: null
>   at org.apache.hadoop.fs.Path.makeQualified(Path.java:518)
>   at 
> org.apache.hadoop.fs.FileSystemContractBaseTest.path(FileSystemContractBaseTest.java:476)
>   at 
> org.apache.hadoop.fs.FileSystemContractBaseTest.tearDown(FileSystemContractBaseTest.java:56)
>   at 
> org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive.tearDown(TestAdlFileSystemContractLive.java:49)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Created] (HADOOP-16160) TestAdlFileSystemContractLive fails in branch-2.8

2019-03-04 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-16160:
--

 Summary: TestAdlFileSystemContractLive fails in branch-2.8
 Key: HADOOP-16160
 URL: https://issues.apache.org/jira/browse/HADOOP-16160
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Akira Ajisaka


{noformat}
Running org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive
Tests run: 30, Failures: 0, Errors: 30, Skipped: 0, Time elapsed: 0.313 sec <<< 
FAILURE! - in org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive
testWorkingDirectory(org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive)
  Time elapsed: 0.134 sec  <<< ERROR!
java.lang.NullPointerException: null
at org.apache.hadoop.fs.Path.makeQualified(Path.java:518)
at 
org.apache.hadoop.fs.FileSystemContractBaseTest.path(FileSystemContractBaseTest.java:476)
at 
org.apache.hadoop.fs.FileSystemContractBaseTest.tearDown(FileSystemContractBaseTest.java:56)
at 
org.apache.hadoop.fs.adl.live.TestAdlFileSystemContractLive.tearDown(TestAdlFileSystemContractLive.java:49)
{noformat}







[jira] [Resolved] (HADOOP-16060) Do not use dist.apache.org for download link

2019-03-04 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-16060.

Resolution: Duplicate

> Do not use dist.apache.org for download link
> 
>
> Key: HADOOP-16060
> URL: https://issues.apache.org/jira/browse/HADOOP-16060
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: site
>Reporter: Akira Ajisaka
>Priority: Major
>
> Please see http://www.apache.org/dev/release-download-pages.html#links for 
> the detail.







Re: Github and GitBox are inconsistent

2019-03-04 Thread Akira Ajisaka
This issue was fixed by the ASF infra team.
If there are any problems, please let me know.

Regards,
Akira

On Mon, Mar 4, 2019 at 3:25 PM Akira Ajisaka  wrote:
>
> Hi folks,
>
> I found github and gitbox are inconsistent and filed
> https://issues.apache.org/jira/browse/INFRA-17947 to fix it.
>
> Regards,
> Akira
