[jira] [Resolved] (HADOOP-16594) add sm4 crypto to hdfs

2020-01-13 Thread Wei-Chiu Chuang (Jira)


 [ https://issues.apache.org/jira/browse/HADOOP-16594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang resolved HADOOP-16594.
--
Resolution: Duplicate

I'm closing this one as a duplicate of HDFS-15098. A patch is available on
HDFS-15098, so we can move the discussion over there.

> add sm4 crypto to hdfs
> --
>
> Key: HADOOP-16594
> URL: https://issues.apache.org/jira/browse/HADOOP-16594
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Affects Versions: 3.1.1
>Reporter: zZtai
>Priority: Minor
>
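
For readers unfamiliar with the cipher being proposed, here is a minimal,
self-contained sketch of SM4/CTR encryption through the standard JCE API.
This is not the HDFS-15098 patch; it only illustrates the algorithm the
issue asks to wire into HDFS, and it assumes the BouncyCastle provider
(bcprov) is on the classpath. Class and variable names are illustrative.

    import java.nio.charset.StandardCharsets;
    import java.security.SecureRandom;
    import java.security.Security;
    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.IvParameterSpec;
    import org.bouncycastle.jce.provider.BouncyCastleProvider;

    public class Sm4CtrSketch {
      public static void main(String[] args) throws Exception {
        Security.addProvider(new BouncyCastleProvider());

        // SM4 uses a 128-bit key and a 128-bit block, like AES-128.
        KeyGenerator keyGen = KeyGenerator.getInstance("SM4", "BC");
        keyGen.init(128);
        SecretKey key = keyGen.generateKey();

        // CTR mode needs a unique IV (counter block) per key/stream.
        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("SM4/CTR/NoPadding", "BC");
        cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        byte[] ciphertext = cipher.doFinal("hello hdfs".getBytes(StandardCharsets.UTF_8));

        cipher.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
        System.out.println(new String(cipher.doFinal(ciphertext), StandardCharsets.UTF_8));
      }
    }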




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [DISCUSS] Guidelines for Code cleanup JIRAs

2020-01-13 Thread Ahmed Hussein
+1
Can we also make sure to add a label to the code cleanup Jiras? At the
very least, that would make it easy to search for and filter them.
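
Purely as an illustration (the label name "code-cleanup" below is an
assumption, not an agreed convention), such a label could be pulled up with
a saved JQL filter, or programmatically through Jira's REST search
endpoint, along these lines:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.net.URLEncoder;
    import java.nio.charset.StandardCharsets;

    public class CleanupJiraSearch {
      public static void main(String[] args) throws Exception {
        // Hypothetical label "code-cleanup"; adjust once a convention is agreed.
        String jql = "project in (HADOOP, HDFS, YARN, MAPREDUCE) "
            + "AND labels = code-cleanup AND resolution = Unresolved";
        URL url = new URL("https://issues.apache.org/jira/rest/api/2/search?jql="
            + URLEncoder.encode(jql, StandardCharsets.UTF_8.name()));
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Accept", "application/json");
        try (BufferedReader in = new BufferedReader(
            new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
          in.lines().forEach(System.out::println); // raw JSON search result
        }
      }
    }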

On Mon, Jan 13, 2020 at 7:24 AM Wei-Chiu Chuang  wrote:

> +1
>
> On Thu, Jan 9, 2020 at 9:33 AM epa...@apache.org  wrote:
>
> > There was some discussion on
> > https://issues.apache.org/jira/browse/YARN-9052
> > about concerns surrounding the costs/benefits of code cleanup JIRAs.
> > This email is to get the discussion going within a wider audience.
> >
> > The positive points for code cleanup JIRAs:
> > - Clean up tech debt
> > - Make code more readable
> > - Make code more maintainable
> > - Make code more performant
> >
> > The concerns regarding code cleanup JIRAs are as follows:
> > - If the changes only go into trunk, then contributors and committers
> >   trying to backport to prior releases will have to create and test
> >   multiple patch versions.
> > - Some have voiced concerns that code cleanup JIRAs may not be tested
> >   as thoroughly as features and bug fixes, because functionality is
> >   not supposed to change.
> > - Any patches awaiting review that touch the same code will have to be
> >   redone, re-tested, and re-reviewed.
> > - JIRAs that are opened for code cleanup and not worked on right away
> >   tend to clutter up the JIRA space.
> >
> > Here are my opinions:
> > - Code changes of any kind force a non-trivial amount of overhead on
> >   other developers. For code cleanup JIRAs, sometimes the usability,
> >   maintainability, and performance gains are worth that overhead (as
> >   in the case of YARN-9052).
> > - Before opening any JIRA, please always consider whether the added
> >   usability will outweigh the added pain you are causing other
> >   developers.
> > - If you believe the benefits outweigh the costs, please backport the
> >   changes yourself to all active lines. My preference is to port all
> >   the way back to 2.10.
> > - Please don't run code analysis tools and then open many JIRAs that
> >   document those findings. That activity does not put any thought into
> >   this cost-benefit analysis.
> >
> > Thanks everyone. I'm looking forward to your thoughts. I appreciate
> > all you do for the open source community, and it is always a pleasure
> > to work with you.
> > -Eric Payne
> >
> > -
> > To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
> >
> >
>


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2020-01-13 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1380/

[Jan 12, 2020 12:48:39 PM] (snemeth) YARN-10067. Add dry-run feature to FS-CS converter tool. Contributed by
[Jan 12, 2020 1:04:15 PM] (snemeth) YARN-9866. u:user2:%primary_group is not working as expected.




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s):
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus
   implements Cloneable but does not define or use the clone method.
   At TaskStatus.java:[lines 39-346]

   Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId
   assumes the argument is of type WorkerId.
   At WorkerId.java:[line 114]

   org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object)
   does not check for a null argument.
   At WorkerId.java:[lines 114-115]
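
Both WorkerId warnings are the standard equals() contract findings. A
generic sketch of the pattern FindBugs expects (this is not the actual
mawo code, and the field is made up): an equals() that handles null and
checks the runtime type before casting, with a hashCode() kept consistent
with it.

    // Generic pattern only -- not the actual mawo WorkerId implementation.
    public final class WorkerId {
      private final String hostname;

      public WorkerId(String hostname) {
        this.hostname = hostname;
      }

      @Override
      public boolean equals(Object other) {
        if (this == other) {
          return true;
        }
        if (!(other instanceof WorkerId)) {   // false for null and other types
          return false;
        }
        WorkerId that = (WorkerId) other;
        return hostname.equals(that.hostname);
      }

      @Override
      public int hashCode() {
        return hostname.hashCode();
      }
    }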

FindBugs :

   module:hadoop-cloud-storage-project/hadoop-cos 
   Redundant nullcheck of dir, which is known to be non-null, in
   org.apache.hadoop.fs.cosn.BufferPool.createDir(String).
   At BufferPool.java:[line 66]

   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose
   internal representation by returning CosNInputStream$ReadBuffer.buffer.
   At CosNInputStream.java:[line 87]

   Found reliance on default encoding (new String(byte[])) in
   org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]).
   At CosNativeFileSystemStore.java:[line 199]

   Found reliance on default encoding (new String(byte[])) in
   org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String,
   InputStream, byte[], long).
   At CosNativeFileSystemStore.java:[line 178]

   org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String,
   String, int) may fail to clean up a java.io.InputStream; the obligation to
   clean up the resource created at CosNativeFileSystemStore.java:[line 252]
   is not discharged.
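
These hadoop-cos findings are also well-known FindBugs patterns. A generic
sketch of the shapes that usually clear them (not the actual CosN code;
names are illustrative): an explicit charset instead of the platform
default, a defensive copy instead of returning the internal buffer, and
try-with-resources so the stream-cleanup obligation is discharged on every
path.

    // Generic patterns only -- not the actual hadoop-cos implementation.
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.charset.StandardCharsets;
    import java.util.Arrays;

    public class CosnFindbugsPatterns {

      // "Reliance on default encoding": name the charset explicitly.
      static String toText(byte[] bytes) {
        return new String(bytes, StandardCharsets.UTF_8);
      }

      // "May expose internal representation": hand out a copy, not the field.
      private byte[] buffer = new byte[4096];
      byte[] getBuffer() {
        return Arrays.copyOf(buffer, buffer.length);
      }

      // "May fail to clean up java.io.InputStream": try-with-resources closes
      // the stream on every path, including exceptions.
      static void uploadPart(File file) throws IOException {
        try (InputStream in = new FileInputStream(file)) {
          // ... read and upload the part ...
        }
      }
    }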

Failed junit tests :

   hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem 
   hadoop.fs.viewfs.TestViewFsTrash 
   hadoop.hdfs.server.balancer.TestBalancer 
   hadoop.hdfs.server.namenode.TestRedudantBlocks 
   hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
   hadoop.hdfs.TestDeadNodeDetection 
   hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem
   hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageDomain
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps
   hadoop.yarn.server.timelineservice.storage.TestTimelineWriterHBaseDown
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction

Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86

2020-01-13 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/566/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s):
   hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in
   org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
   byte[], byte[], KeyConverter, ValueConverter, boolean).
   At ColumnRWHelper.java:[line 335]
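
This is the classic unbox/rebox finding: a boxed value is pulled into a
primitive and then wrapped again before being stored. A generic sketch of
the anti-pattern and the trivial fix (not the actual ColumnRWHelper code;
types and names are illustrative):

    import java.util.Map;
    import java.util.NavigableMap;
    import java.util.TreeMap;

    public class ReboxSketch {
      static NavigableMap<Long, Object> copy(NavigableMap<Long, Object> src) {
        NavigableMap<Long, Object> dst = new TreeMap<>();
        for (Map.Entry<Long, Object> entry : src.entrySet()) {
          // Flagged form: unbox, then immediately rebox.
          //   long ts = entry.getKey();                    // unboxing
          //   dst.put(Long.valueOf(ts), entry.getValue()); // reboxing
          // Fix: keep the already-boxed key as-is.
          dst.put(entry.getKey(), entry.getValue());
        }
        return dst;
      }
    }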

Failed junit tests :

   hadoop.ipc.TestProtoBufRpc 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.balancer.TestBalancerRPCDelay 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.client.api.impl.TestAMRMClient 
   hadoop.mapreduce.v2.TestSpeculativeExecutionWithMRApp 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/566/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/566/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/566/artifact/out/diff-compile-cc-root-jdk1.8.0_232.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/566/artifact/out/diff-compile-javac-root-jdk1.8.0_232.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/566/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/566/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/566/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/566/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/566/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/566/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/566/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/566/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/566/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/566/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/566/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/566/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_232.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/566/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [160K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/566/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [324K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/566/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [12K]
   

Re: [DISCUSS] Guidelines for Code cleanup JIRAs

2020-01-13 Thread Wei-Chiu Chuang
+1
