Re: [DISCUSS] HADOOP-13603 - Remove package line length checkstyle rule

2016-10-20 Thread Andrew Wang
I don't think anything has really changed since we had this discussion in
2015 [1]. Github and gerrit and IDEs existed then too, and we decided to
leave it at 80 characters due to split screens and readability.

I personally still like 80 chars for these same reasons.

[1]
https://lists.apache.org/thread.html/3e1785cbbe14dcab9bb970fa0f534811cfe00795a8cd1100580f27dc@1430849118@%3Ccommon-dev.hadoop.apache.org%3E

On Thu, Oct 20, 2016 at 7:46 AM, John Zhuge  wrote:

> With HADOOP-13411, it is possible to suppress any checkstyle warning with
> an annotation.
>
> In this case, just add the following annotation before the class or method:
>
> @SuppressWarnings("checkstyle:linelength")
>
> However, this will not work if the warning is widespread across different
> classes or methods.
>
> Thanks,
> John Zhuge
>
> John Zhuge
> Software Engineer, Cloudera
>
> On Thu, Oct 20, 2016 at 3:22 AM, Steve Loughran 
> wrote:
>
> >
> > > On 19 Oct 2016, at 14:52, Shane Kumpf 
> > wrote:
> > >
> > > All,
> > >
> > > I would like to start a discussion on the possibility of removing the
> > > package line length checkstyle rule (HADOOP-13603).
> > >
> > > While working on various aspects of YARN container runtimes, all of my
> > > pre-commit jobs would fail as the package line length exceeded 80
> > > characters. While I'm all for automated checks, I feel checks need to be
> > > enforceable and provide value. Fixing the package line length error does
> > > not improve readability or maintainability of the code, and IMO should be
> > > removed.
> > >
> >
> > I kind of agree here
> >
> > Working on other projects with wider line lengths (100, 120) means that
> > going back to 80 chars feels very restrictive; and as we adopt Java 8 code
> > with closures, your nesting gets even more complex. Trying to fit things
> > into 80-char width often adds lots of line breaks, which can make the code
> > messier than it needs to be.
> >
> > The argument against wider lines has historically been that it helped
> > side-by-side patch reviews. But we have so much patch review software
> > these days: GitHub, Gerrit, IDEs. I don't think we need to stay within
> > punched-card-width code limits just because it worked with a review
> > process of 6+ years ago.
> >
> >
> > > While on this topic, are there other automated checks that are difficult
> > > to enforce or you feel are not providing value (perhaps the 150-line
> > > method length)?
> > >
> >
> > I like that as a warning sign of complexity...it's not a hard veto after
> > all.
> >
> > -
> > To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
> >
> >
>


[jira] [Resolved] (HDFS-11017) dfsadmin set/clrSpaceQuota fail to recognize StorageType option

2016-10-20 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou resolved HDFS-11017.
--
Resolution: Invalid

It seems there was a flaky error in my script; it's not an issue anymore.
Thank you [~xyao] for checking.

> dfsadmin set/clrSpaceQuota fail to recognize StorageType option
> ---
>
> Key: HDFS-11017
> URL: https://issues.apache.org/jira/browse/HDFS-11017
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>  Labels: cli
>
> dfsadmin setSpaceQuota and clrSpaceQuota do not recognize valid StorageType 
> options, such as DISK or SSD, even though this is supported by DFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Resolved] (HDFS-11038) DiskBalancer: support running multiple commands under one setup of disk balancer

2016-10-20 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou resolved HDFS-11038.
--
Resolution: Implemented

As explained, this is already implemented by the HDFS-9462 patch. Closing this ticket.

> DiskBalancer: support running multiple commands under one setup of disk 
> balancer
> 
>
> Key: HDFS-11038
> URL: https://issues.apache.org/jira/browse/HDFS-11038
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> Disk balancer reuses a rule from the HDFS balancer: only one instance is 
> allowed to run at a time. This is correct in a production system, to avoid 
> inconsistencies, but it is not ideal for writing and running unit tests. For 
> example, it should be possible to run the plan, execute, and scan commands 
> under one setup of disk balancer. The one-instance rule will throw an 
> exception complaining 'Another instance is running'. In such a case, there 
> is no way to run full life-cycle tests that involve a sequence of commands.







[jira] [Created] (HDFS-11040) Add documentation for HDFS-9820 distcp improvement

2016-10-20 Thread Yongjun Zhang (JIRA)
Yongjun Zhang created HDFS-11040:


 Summary: Add documentation for HDFS-9820 distcp improvement
 Key: HDFS-11040
 URL: https://issues.apache.org/jira/browse/HDFS-11040
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: distcp
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang










Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2016-10-20 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/130/

[Oct 19, 2016 4:45:23 PM] (sjlee) YARN-5561. [Atsv2] : Support for ability to 
retrieve
[Oct 20, 2016 12:20:07 AM] (arp) HDFS-10752. Several log 
refactoring/improvement suggestion in HDFS.
[Oct 20, 2016 12:37:54 AM] (yzhang) HDFS-9820. Improve distcp to support 
efficient restore to an earlier
[Oct 20, 2016 5:11:18 AM] (brahma) HDFS-11025. TestDiskspaceQuotaUpdate fails 
in trunk due to Bind




-1 overall


The following subsystems voted -1:
compile unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.TestWriteReadStripedFile 
   hadoop.hdfs.TestWriteRead 
   hadoop.hdfs.TestAclsEndToEnd 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService 
   hadoop.yarn.server.nodemanager.TestNodeManagerShutdown 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   
hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.timelineservice.storage.common.TestRowKeys 
   hadoop.yarn.server.timelineservice.storage.common.TestKeyConverters 
   hadoop.yarn.server.timelineservice.storage.common.TestSeparator 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.resourcemanager.TestResourceTrackerService 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.client.cli.TestLogsCLI 
   hadoop.yarn.client.api.impl.TestNMClient 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorage 
   
hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction
 
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun 
   
hadoop.yarn.server.timelineservice.storage.TestPhoenixOfflineAggregationWriterImpl
 
   
hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
 
   
hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 
   hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked 

Timed out junit tests :

   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
  

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/130/artifact/out/patch-compile-root.txt
  [312K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/130/artifact/out/patch-compile-root.txt
  [312K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/130/artifact/out/patch-compile-root.txt
  [312K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/130/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [200K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/130/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/130/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [56K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/130/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice.txt
  [20K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/130/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [72K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/130/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [268K]
   

[jira] [Resolved] (HDFS-11019) Inconsistent number of corrupt replicas if a corrupt replica is reported multiple times

2016-10-20 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDFS-11019.

Resolution: Duplicate

I am pretty sure this is a dup of HDFS-9958. Thanks [~kshukla] for confirming 
this!

> Inconsistent number of corrupt replicas if a corrupt replica is reported 
> multiple times
> ---
>
> Key: HDFS-11019
> URL: https://issues.apache.org/jira/browse/HDFS-11019
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
> Environment: CDH5.7.2 
>Reporter: Wei-Chiu Chuang
> Attachments: HDFS-11019.test.patch
>
>
> While investigating a block corruption issue, I found the following warning 
> message in the namenode log:
> {noformat}
> (a client reports a block replica is corrupt)
> 2016-10-12 10:07:37,166 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1073803461 added as corrupt on 
> 10.0.0.63:50010 by /10.0.0.62  because client machine reported it
> 2016-10-12 10:07:37,166 INFO BlockStateChange: BLOCK* invalidateBlock: 
> blk_1073803461_74513(stored=blk_1073803461_74553) on 10.0.0.63:50010
> 2016-10-12 10:07:37,166 INFO BlockStateChange: BLOCK* InvalidateBlocks: add 
> blk_1073803461_74513 to 10.0.0.63:50010
> (another client reports a block replica is corrupt)
> 2016-10-12 10:07:37,728 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1073803461 added as corrupt on 
> 10.0.0.63:50010 by /10.0.0.64  because client machine reported it
> 2016-10-12 10:07:37,728 INFO BlockStateChange: BLOCK* invalidateBlock: 
> blk_1073803461_74513(stored=blk_1073803461_74553) on 10.0.0.63:50010
> (ReplicationMonitor thread kicks in to invalidate the replica and add a new 
> one)
> 2016-10-12 10:07:37,888 INFO BlockStateChange: BLOCK* ask 10.0.0.56:50010 to 
> replicate blk_1073803461_74553 to datanode(s) 10.0.0.63:50010
> 2016-10-12 10:07:37,888 INFO BlockStateChange: BLOCK* BlockManager: ask 
> 10.0.0.63:50010 to delete [blk_1073803461_74513]
> (the two maps are inconsistent)
> 2016-10-12 10:08:00,335 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Inconsistent 
> number of corrupt replicas for blk_1073803461_74553 blockMap has 0 but 
> corrupt replicas map has 1
> {noformat}
> It seems that when a corrupt block replica is reported twice, blocksMap 
> and the corrupt replicas map become inconsistent.
> Looking at the log, I suspect the bug is in 
> {{BlockManager#removeStoredBlock}}. When a corrupt replica is reported, 
> BlockManager removes the block from blocksMap. If the block has already 
> been removed (that is, the corrupt replica is reported twice), it returns 
> early; otherwise (that is, the corrupt replica is reported for the first 
> time), it also removes the block from corruptReplicasMap (the block is 
> added into corruptReplicasMap in {{BlockManager#markBlockAsCorrupt}}). 
> Therefore, after the second corruption report, the corrupt replica has been 
> removed from blocksMap, but the entry in corruptReplicasMap remains.
> I can't tell what the impact of this inconsistency is, but I feel it's a 
> good idea to fix it.
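The suspected interaction can be modeled with a toy class (an illustrative simplification, not the actual BlockManager code; the map types and method bodies here are invented purely for the demonstration):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy model of the suspected bug: the early return in removeStoredBlock
// skips the corruptReplicasMap cleanup on a second corruption report.
public final class CorruptReplicaModel {
  final Set<String> blocksMap = new HashSet<>();
  final Map<String, Integer> corruptReplicasMap = new HashMap<>();

  void reportCorrupt(String block) {
    // markBlockAsCorrupt records the corrupt replica...
    corruptReplicasMap.merge(block, 1, Integer::sum);
    // ...and the invalidation path ends up in removeStoredBlock.
    removeStoredBlock(block);
  }

  void removeStoredBlock(String block) {
    if (!blocksMap.remove(block)) {
      return;  // block already gone: the cleanup below never runs
    }
    corruptReplicasMap.remove(block);
  }

  static boolean reproduces() {
    CorruptReplicaModel m = new CorruptReplicaModel();
    m.blocksMap.add("blk_1073803461");
    m.reportCorrupt("blk_1073803461");  // first report: both maps cleaned up
    m.reportCorrupt("blk_1073803461");  // second report: early return fires
    // blocksMap now shows 0 replicas while corruptReplicasMap shows 1 --
    // matching the "Inconsistent number of corrupt replicas" warning.
    return m.blocksMap.isEmpty()
        && m.corruptReplicasMap.getOrDefault("blk_1073803461", 0) == 1;
  }

  public static void main(String[] args) {
    System.out.println(reproduces());
  }
}
```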







[jira] [Reopened] (HDFS-10423) Increase default value of httpfs maxHttpHeaderSize

2016-10-20 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen reopened HDFS-10423:
--

> Increase default value of httpfs maxHttpHeaderSize
> --
>
> Key: HDFS-10423
> URL: https://issues.apache.org/jira/browse/HDFS-10423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.4, 3.0.0-alpha1
>Reporter: Nicolae Popa
>Assignee: Nicolae Popa
>Priority: Minor
> Fix For: 3.0.0-alpha1
>
> Attachments: HDFS-10423.01.patch, HDFS-10423.02.patch, 
> HDFS-10423.branch-2.patch, testing-after-HDFS-10423.txt, 
> testing-after-HDFS-10423_withCustomHeader4.txt, 
> testing-before-HDFS-10423.txt
>
>
> The Tomcat default value of maxHttpHeaderSize is 8k, which is too low for 
> certain Hadoop workloads in Kerberos-enabled environments. This JIRA will 
> change it to 65536 in server.xml.
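For illustration, the change amounts to setting the maxHttpHeaderSize attribute on the HTTP connector in Tomcat's server.xml; the port and protocol below are placeholders, and only the maxHttpHeaderSize attribute is the one under discussion:

```xml
<!-- server.xml: raise the HTTP header limit from Tomcat's 8 KB default -->
<Connector port="14000" protocol="HTTP/1.1"
           maxHttpHeaderSize="65536"/>
```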







Re: [DISCUSS] HADOOP-13603 - Remove package line length checkstyle rule

2016-10-20 Thread John Zhuge
With HADOOP-13411, it is possible to suppress any checkstyle warning with
an annotation.

In this case, just add the following annotation before the class or method:

@SuppressWarnings("checkstyle:linelength")

However, this will not work if the warning is widespread across different
classes or methods.
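A minimal sketch of the annotation in use (the class and method names here are hypothetical; the suppression only takes effect when the checkstyle configuration wires up the SuppressWarningsHolder/SuppressWarningsFilter pair, which is what HADOOP-13411 did):

```java
public final class LineLengthExample {

  private LineLengthExample() {
  }

  // Hypothetical method: the annotation asks checkstyle to skip the
  // LineLength check for this method only; code outside it is still checked.
  @SuppressWarnings("checkstyle:linelength")
  public static String buildLongMessage() {
    return "a deliberately long diagnostic message that would otherwise trip the 80-character LineLength rule";
  }

  public static void main(String[] args) {
    System.out.println(buildLongMessage());
  }
}
```

Since javac ignores @SuppressWarnings values it does not recognize, the annotation is harmless at compile time.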

Thanks,
John Zhuge

John Zhuge
Software Engineer, Cloudera

On Thu, Oct 20, 2016 at 3:22 AM, Steve Loughran 
wrote:

>
> > On 19 Oct 2016, at 14:52, Shane Kumpf 
> wrote:
> >
> > All,
> >
> > I would like to start a discussion on the possibility of removing the
> > package line length checkstyle rule (HADOOP-13603).
> >
> > While working on various aspects of YARN container runtimes, all of my
> > pre-commit jobs would fail as the package line length exceeded 80
> > characters. While I'm all for automated checks, I feel checks need to be
> > enforceable and provide value. Fixing the package line length error does
> > not improve readability or maintainability of the code, and IMO should be
> > removed.
> >
>
> I kind of agree here
>
> Working on other projects with wider line lengths (100, 120) means that
> going back to 80 chars feels very restrictive; and as we adopt Java 8 code
> with closures, your nesting gets even more complex. Trying to fit things
> into 80-char width often adds lots of line breaks, which can make the code
> messier than it needs to be.
>
> The argument against wider lines has historically been that it helped
> side-by-side patch reviews. But we have so much patch review software
> these days: GitHub, Gerrit, IDEs. I don't think we need to stay within
> punched-card-width code limits just because it worked with a review process
> of 6+ years ago.
>
>
> > While on this topic, are there other automated checks that are difficult
> to
> > enforce or you feel are not providing value (perhaps the 150 line method
> > length)?
> >
>
> I like that as a warning sign of complexity...it's not a hard veto after
> all.
>
>
>


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2016-10-20 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/200/

[Oct 19, 2016 4:45:23 PM] (sjlee) YARN-5561. [Atsv2] : Support for ability to 
retrieve
[Oct 20, 2016 12:20:07 AM] (arp) HDFS-10752. Several log 
refactoring/improvement suggestion in HDFS.
[Oct 20, 2016 12:37:54 AM] (yzhang) HDFS-9820. Improve distcp to support 
efficient restore to an earlier
[Oct 20, 2016 5:11:18 AM] (brahma) HDFS-11025. TestDiskspaceQuotaUpdate fails 
in trunk due to Bind




-1 overall


The following subsystems voted -1:
asflicense findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-common-project/hadoop-kms 
   Exception is caught when Exception is not thrown in 
org.apache.hadoop.crypto.key.kms.server.KMS.createKey(Map) At KMS.java:is not 
thrown in org.apache.hadoop.crypto.key.kms.server.KMS.createKey(Map) At 
KMS.java:[line 169] 
   Exception is caught when Exception is not thrown in 
org.apache.hadoop.crypto.key.kms.server.KMS.generateEncryptedKeys(String, 
String, int) At KMS.java:is not thrown in 
org.apache.hadoop.crypto.key.kms.server.KMS.generateEncryptedKeys(String, 
String, int) At KMS.java:[line 501] 

Failed junit tests :

   hadoop.hdfs.server.datanode.TestDataNodeLifeline 
   hadoop.hdfs.qjournal.client.TestQJMWithFaults 
   
hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
 
   hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.client.cli.TestLogsCLI 
   hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/200/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/200/artifact/out/diff-compile-javac-root.txt
  [168K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/200/artifact/out/diff-checkstyle-root.txt
  [16M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/200/artifact/out/diff-patch-pylint.txt
  [16K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/200/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/200/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/200/artifact/out/whitespace-eol.txt
  [11M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/200/artifact/out/whitespace-tabs.txt
  [1.3M]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/200/artifact/out/branch-findbugs-hadoop-common-project_hadoop-kms-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/200/artifact/out/diff-javadoc-javadoc-root.txt
  [2.2M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/200/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [148K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/200/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [40K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/200/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/200/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [56K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/200/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [268K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/200/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/200/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt
  [124K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/200/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt
  [16K]

   asflicense:

   

Re: [DISCUSS] HADOOP-13603 - Remove package line length checkstyle rule

2016-10-20 Thread Steve Loughran

> On 19 Oct 2016, at 14:52, Shane Kumpf  wrote:
> 
> All,
> 
> I would like to start a discussion on the possibility of removing the
> package line length checkstyle rule (HADOOP-13603).
> 
> While working on various aspects of YARN container runtimes, all of my
> pre-commit jobs would fail as the package line length exceeded 80
> characters. While I'm all for automated checks, I feel checks need to be
> enforceable and provide value. Fixing the package line length error does
> not improve readability or maintainability of the code, and IMO should be
> removed.
> 

I kind of agree here

Working on other projects with wider line lengths (100, 120) means that going 
back to 80 chars feels very restrictive; and as we adopt Java 8 code with 
closures, your nesting gets even more complex. Trying to fit things into 
80-char width often adds lots of line breaks, which can make the code messier 
than it needs to be.

The argument against wider lines has historically been that it helped 
side-by-side patch reviews. But we have so much patch review software these 
days: GitHub, Gerrit, IDEs. I don't think we need to stay within 
punched-card-width code limits just because it worked with a review process of 
6+ years ago.


> While on this topic, are there other automated checks that are difficult to
> enforce or you feel are not providing value (perhaps the 150 line method
> length)?
> 

I like that as a warning sign of complexity...it's not a hard veto after all.
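For context, the checkstyle rule under discussion is of roughly this shape (a sketch, not Hadoop's actual checkstyle.xml; a project widening its limit would raise max, and a project dropping the rule would delete the module):

```xml
<module name="LineLength">
  <!-- the limit being debated: 80 vs. 100/120 -->
  <property name="max" value="80"/>
  <!-- package/import lines and URLs are common candidates for exemption -->
  <property name="ignorePattern" value="^package .*|^import .*|https?://"/>
</module>
```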
