Re: Unassigned Hadoop jiras with patch available

2019-08-01 Thread Wei-Chiu Chuang
I have assigned all jiras with patches available that were created since 2019. If
you have jiras that you are actively working on but can't assign to
yourself, please let me know.

On Wed, Jul 31, 2019 at 3:11 PM Wei-Chiu Chuang  wrote:

> I was told the filter is private. I am sorry.
>
> This one should be good:
> https://issues.apache.org/jira/issues/?jql=project%20in%20(HADOOP%2C%20HDFS%2CYARN%2CMAPREDUCE%2CHDDS%2CSUBMARINE)%20AND%20status%20%3D%20%22Patch%20Available%22%20AND%20assignee%20%3D%20EMPTY%20ORDER%20BY%20created%20DESC%2C%20updated%20DESC
>
> On Wed, Jul 31, 2019 at 3:02 PM Wei-Chiu Chuang 
> wrote:
>
>> I am using this jira filter to find jiras with a patch available but no
>> assignee.
>>
>> https://issues.apache.org/jira/issues/?filter=12346814=project%20in%20(HADOOP%2C%20HDFS%2CYARN%2CMAPREDUCE%2CHDDS%2CSUBMARINE)%20AND%20status%20%3D%20%22Patch%20Available%22%20AND%20assignee%20%3D%20EMPTY%20ORDER%20BY%20created%20DESC%2C%20updated%20DESC
>>
>> In most cases, these jiras are unassigned because the contributors who
>> posted the patches are first-timers who do not have the contributor role
>> in the jira. It's very common for these folks to get overlooked.
>>
>> Hadoop PMC members, if you have the JIRA administrator permission, please
>> help grant contributor access to these contributors. This helps keep the
>> project friendly to newcomers.
>>
>> You can do so by going to JIRA --> (upper right, click on the gear next
>> to your profile avatar) --> Projects --> click on the project (say Hadoop
>> HDFS) --> Roles --> View Project Roles --> Add users to a role --> add to
>> Contributor list, or if the Contributor list is full, add to Contributor1
>> list.
>>
>> Or you can go to
>> https://issues.apache.org/jira/plugins/servlet/project-config/HDFS/roles to
>> add contributor access for HDFS. The same goes for Hadoop Common and the
>> other sub-projects.
>>
>
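For anyone scripting against the filter quoted above, the same JQL can be fed to JIRA's REST search endpoint. The sketch below only builds the request URL; the `/rest/api/2/search` path is the standard public JIRA REST API, and it accepts spaces encoded as `+` as well as `%20`. Treat the endpoint details as assumptions, not as anything stated in the thread.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class UnassignedPatchQuery {
    // The JQL from the thread: unassigned issues still in "Patch Available".
    static final String JQL =
        "project in (HADOOP, HDFS, YARN, MAPREDUCE, HDDS, SUBMARINE) "
        + "AND status = \"Patch Available\" AND assignee = EMPTY "
        + "ORDER BY created DESC, updated DESC";

    static String searchUrl() {
        // Assumed endpoint: JIRA's REST search API takes the JQL in the
        // URL-encoded "jql" query parameter.
        return "https://issues.apache.org/jira/rest/api/2/search?jql="
            + URLEncoder.encode(JQL, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(searchUrl());
    }
}
```

The REST endpoint returns the matching issues as JSON, which is easier to post-process than the browser filter UI.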


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-08-01 Thread Apache Jenkins Server
For more details, see
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1215/

[Jul 31, 2019 5:39:03 AM] (github) HDDS-1856. Make required changes for Non-HA 
to use new HA code in OM.
[Jul 31, 2019 7:56:24 AM] (31469764+bshashikant) HDDS-1816: 
ContainerStateMachine should limit number of pending apply
[Jul 31, 2019 2:07:27 PM] (elek) HDDS-1877. hadoop31-mapreduce fails due to 
wrong HADOOP_VERSION
[Jul 31, 2019 4:18:40 PM] (github) HDDS-1849. Implement S3 Complete MPU request 
to use Cache and
[Jul 31, 2019 4:37:53 PM] (arp7) HDDS-1875. Fix failures in 
TestS3MultipartUploadAbortResponse. (#1188)
[Jul 31, 2019 5:11:36 PM] (github) HADOOP-16398. Exports Hadoop metrics to 
Prometheus (#1170)
[Jul 31, 2019 5:24:48 PM] (elgoiri) HDFS-14681. RBF: TestDisableRouterQuota 
failed because port  was




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s):
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core

   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:[lines 39-346]
   Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:[line 114]
   org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:[lines 114-115]
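The two WorkerId warnings above describe a common equals() pitfall: casting the argument without a type check and never handling null. A hypothetical sketch of the pattern FindBugs expects follows; the class body and the "hostname" field are invented for illustration and are not taken from the MaWo source.

```java
import java.util.Objects;

// Illustrative stand-in for the flagged class; the field is invented.
class WorkerId {
    private final String hostname;

    WorkerId(String hostname) {
        this.hostname = hostname;
    }

    @Override
    public boolean equals(Object o) {
        // instanceof is false for null, so this single check also covers
        // the "does not check for null argument" warning.
        if (!(o instanceof WorkerId)) {
            return false;
        }
        WorkerId other = (WorkerId) o;
        return Objects.equals(hostname, other.hostname);
    }

    @Override
    public int hashCode() {
        return Objects.hash(hostname);
    }
}
```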

FindBugs :

   module:hadoop-tools/hadoop-aws

   Inconsistent synchronization of org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.ttlTimeProvider; locked 75% of time, unsynchronized access at LocalMetadataStore.java:[line 623]
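The LocalMetadataStore warning above is the classic "field locked on some paths but not others" report. One way to satisfy FindBugs is to route every read and write through the same monitor; the sketch below is a hedged illustration of that fix, not the actual S3Guard code, and the class and field types are placeholders.

```java
// Hypothetical sketch: all access to the field goes through methods
// synchronized on the same monitor, so FindBugs sees 100% locked access.
class TtlHolder {
    private Object ttlTimeProvider; // guarded by "this"

    synchronized void setTtlTimeProvider(Object provider) {
        ttlTimeProvider = provider;
    }

    synchronized Object getTtlTimeProvider() {
        return ttlTimeProvider;
    }
}
```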

Failed junit tests :

   hadoop.ha.TestZKFailoverController
   hadoop.util.TestReadWriteDiskValidator
   hadoop.hdfs.server.datanode.TestLargeBlockReport
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery
   hadoop.hdfs.TestRollingUpgrade
   hadoop.fs.http.client.TestHttpFSWithHttpFSFileSystem
   hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem
   hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup
   hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken
   hadoop.yarn.applications.distributedshell.TestDistributedShell

   cc:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1215/artifact/out/diff-compile-cc-root.txt
   [4.0K]

   javac:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1215/artifact/out/diff-compile-javac-root.txt
   [332K]

   checkstyle:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1215/artifact/out/diff-checkstyle-root.txt
   [17M]

   hadolint:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1215/artifact/out/diff-patch-hadolint.txt
   [4.0K]

   pathlen:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1215/artifact/out/pathlen.txt
   [12K]

   pylint:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1215/artifact/out/diff-patch-pylint.txt
   [216K]

   shellcheck:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1215/artifact/out/diff-patch-shellcheck.txt
   [20K]

   shelldocs:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1215/artifact/out/diff-patch-shelldocs.txt
   [44K]

   whitespace:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1215/artifact/out/whitespace-eol.txt
   [9.6M]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1215/artifact/out/whitespace-tabs.txt
   [1.1M]

   xml:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1215/artifact/out/xml.txt
   [16K]

   findbugs:

Re: [VOTE] Force "squash and merge" option for PR merge on github UI

2019-08-01 Thread Sunil Govindan
Hi All

INFRA-18777 is closed, and the github UI now has options #1 and #3 disabled;
only Squash and Merge is possible.
Could we start using this option (merging from the UI) from now on?

- Sunil

On Mon, Jul 22, 2019 at 10:24 AM Bharat Viswanadham
 wrote:

> +1 for squash and merge.
>
> And if we use the Github UI, the original contributor will be shown as the
> author of the code, not whoever clicks the squash and merge.
>
> [image: Screen Shot 2019-07-17 at 11.58.51 AM.png]
>
> As in the screenshot, arp7 committed the change from the Github UI, but the
> author is still shown as the original author "bharatviswa504".
>
> * ef66e4999f3 N - HDDS-1666. Issue in openKey when allocating block.
> (#943) (2 days ago)
> Thanks,
> Bharat
>
>
>
> On Wed, Jul 17, 2019 at 10:20 AM Iñigo Goiri  wrote:
>
>> +1
>>
>> On Wed, Jul 17, 2019 at 4:17 AM Steve Loughran
>> 
>> wrote:
>>
>> > +1 for squash and merge, with whoever does the merge adding the full
>> commit
>> > message for the logs, with JIRA, contributor(s) etc
>> >
>> > One limit of the github process is that the author of the commit becomes
>> > whoever hit the squash button, not whoever wrote the code, so it loses the
>> > credit they are due. This is why I'm doing local merges (with some help
>> > from smart-apply-patch). I think I'll have to explore smart-apply-patch to
>> > see if I can do even more with it.
>> >
>> >
>> >
>> >
>> > On Wed, Jul 17, 2019 at 7:07 AM Elek, Marton  wrote:
>> >
>> > > Hi,
>> > >
>> > > Github UI (ui!) helps to merge Pull Requests to the proposed branch.
>> > > There are three different ways to do it [1]:
>> > >
>> > > 1. Keep all the different commits from the PR branch and create one
>> > > additional merge commit ("Create a merge commit")
>> > >
>> > > 2. Squash all the commits and commit the change as one patch ("Squash
>> > > and merge")
>> > >
>> > > 3. Keep all the different commits from the PR branch but rebase, merge
>> > > commit will be missing ("Rebase and merge")
>> > >
>> > >
>> > >
>> > > As only option 2 is compatible with the existing development
>> > > practices of Hadoop (1 issue = 1 patch = 1 commit), I call for a lazy
>> > > consensus vote: if there are no objections within 3 days, I will ask
>> > > INFRA to disable options 1 and 3 to make the process less error-prone.
>> > >
>> > > Please let me know what you think,
>> > >
>> > > Thanks a lot
>> > > Marton
>> > >
>> > > ps: Personally I prefer to merge locally, as that lets me sign the
>> > > commits and do a final build before pushing. But that is a different
>> > > story; this proposal is only about removing the options which are
>> > > obviously risky...
>> > >
>> > > ps2: You can always do any kind of merge or commit from the CLI, for
>> > > example to merge a feature branch while keeping its history.
>> > >
>> > > [1]:
>> > >
>> > >
>> >
>> https://help.github.com/en/articles/merging-a-pull-request#merging-a-pull-request-on-github
>> > >
>> > > -
>> > > To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
>> > > For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>> > >
>> > >
>> >
>>
>
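The local-merge flow discussed in the thread (squash a PR to one commit from the CLI while keeping the contributor's authorship) can be sketched with plain git. All repository, branch, author, and JIRA names below are placeholders, not taken from the thread.

```shell
# Sketch of a local "squash and merge" that credits the contributor.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git checkout -q -b trunk
git config user.email "committer@example.com"
git config user.name "Committer"
echo base > file.txt && git add file.txt && git commit -q -m "base"

# The contributor's PR branch, with several work-in-progress commits.
git checkout -q -b pr-branch
echo one >> file.txt && git commit -q -a -m "wip 1"
echo two >> file.txt && git commit -q -a -m "wip 2"

# Squash-merge onto trunk: one commit, authored by the contributor,
# with the full JIRA-style message the thread asks for.
git checkout -q trunk
git merge --squash -q pr-branch
git commit -q --author="Contributor Name <contributor@example.com>" \
  -m "HADOOP-12345. Example summary. Contributed by Contributor Name."

git log -1 --format="%an <%ae>"   # prints Contributor Name <contributor@example.com>
```

With `--author`, the squashed commit records the contributor as author while the committer field records whoever pushed, which is the same split of credit the GitHub squash button produces.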


Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-08-01 Thread Apache Jenkins Server
For more details, see
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/400/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s):
   hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client

   Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean) At ColumnRWHelper.java:[line 335]

Failed junit tests :

   hadoop.hdfs.server.datanode.TestDirectoryScanner
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
   hadoop.registry.secure.TestSecureLogins
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2
   hadoop.yarn.client.api.impl.TestAMRMProxy
   hadoop.yarn.sls.TestSLSRunner

   cc:

   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/400/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
   [4.0K]

   javac:

   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/400/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
   [328K]

   cc:

   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/400/artifact/out/diff-compile-cc-root-jdk1.8.0_212.txt
   [4.0K]

   javac:

   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/400/artifact/out/diff-compile-javac-root-jdk1.8.0_212.txt
   [308K]

   checkstyle:

   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/400/artifact/out/diff-checkstyle-root.txt
   [16M]

   hadolint:

   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/400/artifact/out/diff-patch-hadolint.txt
   [4.0K]

   pathlen:

   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/400/artifact/out/pathlen.txt
   [12K]

   pylint:

   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/400/artifact/out/diff-patch-pylint.txt
   [24K]

   shellcheck:

   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/400/artifact/out/diff-patch-shellcheck.txt
   [72K]

   shelldocs:

   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/400/artifact/out/diff-patch-shelldocs.txt
   [8.0K]

   whitespace:

   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/400/artifact/out/whitespace-eol.txt
   [12M]
   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/400/artifact/out/whitespace-tabs.txt
   [1.2M]

   xml:

   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/400/artifact/out/xml.txt
   [12K]

   findbugs:

   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/400/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
   [8.0K]

   javadoc:

   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/400/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
   [16K]
   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/400/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_212.txt
   [1.1M]

   unit:

   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/400/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
   [228K]
   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/400/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry.txt
   [12K]
   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/400/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
   [20K]
   https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/400/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
   [20K]