[jira] [Created] (HDFS-14903) Update access time in toCompleteFile

2019-10-10 Thread lihanran (Jira)
lihanran created HDFS-14903:
---

 Summary: Update access time in toCompleteFile
 Key: HDFS-14903
 URL: https://issues.apache.org/jira/browse/HDFS-14903
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: lihanran






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-10-10 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/471/

[Oct 10, 2019 4:07:18 PM] (xkrogen) HDFS-14162. [SBN read] Allow Balancer to 
work with Observer node. Add a
[Oct 10, 2019 4:09:50 PM] (ekrogen) HDFS-14245. [SBN read] Enable 
ObserverReadProxyProvider to work with
[Oct 10, 2019 8:29:30 PM] (cliang) HDFS-14509. DN throws InvalidToken due to 
inequality of password when

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org

[jira] [Resolved] (HDDS-1230) Update OzoneServiceProvider in s3 gateway to handle OM ha

2019-10-10 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-1230.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

Fixed as part of HDDS-2019.

> Update OzoneServiceProvider in s3 gateway to handle OM ha
> -
>
> Key: HDDS-1230
> URL: https://issues.apache.org/jira/browse/HDDS-1230
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Priority: Major
> Fix For: 0.5.0
>
>
> Update OzoneServiceProvider in s3 gateway to handle OM ha



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [DISCUSS] Hadoop 2.10.0 release plan

2019-10-10 Thread Jonathan Hung
Hi folks, as of now all 2.10.0 blockers have been resolved [1]. So I'll
start the release process soon (cutting branches, updating target versions,
etc).

[1] https://issues.apache.org/jira/issues/?filter=12346975

Jonathan Hung


On Mon, Aug 26, 2019 at 10:19 AM Jonathan Hung  wrote:

> Hi folks,
>
> As discussed previously (e.g. [1], [2]) we'd like to do a 2.10.0 release
> soon. Some features/big-items we're targeting for this release:
>
>- YARN resource types/GPU support (YARN-8200)
>- Selective wire encryption (HDFS-13541)
>- Rolling upgrade support from 2.x to 3.x (e.g. HDFS-14509)
>
> Per [3], it sounds like there's concern around upgrading dependencies as well.
>
> We created a public jira filter here (
> https://issues.apache.org/jira/issues/?filter=12346975) marking all
> blockers for 2.10.0 release. If you have other jiras that should be 2.10.0
> blockers, please mark "Target Version/s" as "2.10.0" and add label
> "release-blocker" so we can track it through this filter.
>
> We're targeting a release at the end of September.
>
> Please share any thoughts you have about this. Thanks!
>
> [1] https://www.mail-archive.com/yarn-dev@hadoop.apache.org/msg29461.html
> [2]
> https://www.mail-archive.com/mapreduce-dev@hadoop.apache.org/msg21293.html
> [3] https://www.mail-archive.com/yarn-dev@hadoop.apache.org/msg33440.html
>
>
> Jonathan Hung
>


[jira] [Resolved] (HDDS-1986) Fix listkeys API

2019-10-10 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-1986.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 7h 10m
>  Remaining Estimate: 0h
>
> This Jira is to fix the listKeys API in the HA code path.
> In HA, writes first go to an in-memory cache and the response is returned 
> immediately; the entries are later picked up by the double buffer thread and 
> flushed to disk. So listKeys must consult both the in-memory cache and the 
> RocksDB key table to list the keys in a bucket.
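
For illustration, a minimal sketch of that merge, assuming a sorted in-memory 
cache in which a null value marks a not-yet-flushed delete; the real Ozone 
TableCache/TypedTable APIs may differ:

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class ListKeysSketch {
  // Merge cache entries not yet flushed by the double buffer thread with
  // the keys already persisted in the key table. TreeMap stands in for the
  // sorted RocksDB key table iterator.
  static List<String> listKeys(TreeMap<String, String> cache,
                               TreeMap<String, String> keyTable,
                               String bucketPrefix) {
    TreeMap<String, String> merged = new TreeMap<>(keyTable);
    for (Map.Entry<String, String> e : cache.entrySet()) {
      if (e.getValue() == null) {
        merged.remove(e.getKey());            // deleted but not yet flushed
      } else {
        merged.put(e.getKey(), e.getValue()); // newer than the DB copy
      }
    }
    List<String> result = new ArrayList<>();
    for (String key : merged.keySet()) {
      if (key.startsWith(bucketPrefix)) {     // keys of one bucket share a prefix
        result.add(key);
      }
    }
    return result;
  }
}
{code}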



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-2282) scmcli pipeline list command throws NullPointerException

2019-10-10 Thread Xiaoyu Yao (Jira)
Xiaoyu Yao created HDDS-2282:


 Summary: scmcli pipeline list command throws NullPointerException
 Key: HDDS-2282
 URL: https://issues.apache.org/jira/browse/HDDS-2282
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Nilotpal Nandi
Assignee: Xiaoyu Yao


ozone scmcli pipeline list
{noformat}
java.lang.NullPointerException
at 
com.google.common.base.Preconditions.checkNotNull(Preconditions.java:187)
at 
org.apache.hadoop.hdds.scm.XceiverClientManager.&lt;init&gt;(XceiverClientManager.java:98)
at 
org.apache.hadoop.hdds.scm.XceiverClientManager.&lt;init&gt;(XceiverClientManager.java:83)
at 
org.apache.hadoop.hdds.scm.cli.SCMCLI.createScmClient(SCMCLI.java:139)
at 
org.apache.hadoop.hdds.scm.cli.pipeline.ListPipelinesSubcommand.call(ListPipelinesSubcommand.java:55)
at 
org.apache.hadoop.hdds.scm.cli.pipeline.ListPipelinesSubcommand.call(ListPipelinesSubcommand.java:30)
at picocli.CommandLine.execute(CommandLine.java:1173)
at picocli.CommandLine.access$800(CommandLine.java:141)
at picocli.CommandLine$RunLast.handle(CommandLine.java:1367)
at picocli.CommandLine$RunLast.handle(CommandLine.java:1335)
at 
picocli.CommandLine$AbstractParseResultHandler.handleParseResult(CommandLine.java:1243)
at picocli.CommandLine.parseWithHandlers(CommandLine.java:1526)
at picocli.CommandLine.parseWithHandler(CommandLine.java:1465)
at org.apache.hadoop.hdds.cli.GenericCli.execute(GenericCli.java:65)
at org.apache.hadoop.hdds.cli.GenericCli.run(GenericCli.java:56)
at org.apache.hadoop.hdds.scm.cli.SCMCLI.main(SCMCLI.java:101){noformat}
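
For context, the top frame is Guava's Preconditions.checkNotNull failing inside 
the XceiverClientManager constructor. A minimal sketch of that failure mode, 
with a hypothetical argument standing in for whatever is null in the real 
constructor:

{code:java}
import com.google.common.base.Preconditions;

public class NpeSketch {
  private final Object conf;

  // Mirrors the pattern behind XceiverClientManager.java:98: a constructor
  // argument validated with checkNotNull throws a bare NullPointerException
  // when the caller passes null.
  NpeSketch(Object conf) {
    this.conf = Preconditions.checkNotNull(conf);
  }

  public static void main(String[] args) {
    new NpeSketch(null); // throws java.lang.NullPointerException
  }
}
{code}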



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-2281) ContainerStateMachine#handleWriteChunk should ignore close container exception

2019-10-10 Thread Shashikant Banerjee (Jira)
Shashikant Banerjee created HDDS-2281:
-

 Summary: ContainerStateMachine#handleWriteChunk should ignore 
close container exception 
 Key: HDDS-2281
 URL: https://issues.apache.org/jira/browse/HDDS-2281
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.5.0


Currently, ContainerStateMachine#applyTransaction ignores the close container 
exception. Similarly, the ContainerStateMachine#handleWriteChunk call should 
also ignore the close container exception.
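
For illustration, a hedged sketch of the guard being asked for; the exception 
and result names are stand-ins for the real StorageContainerException and 
ContainerProtos values, not the actual patch:

{code:java}
// Hypothetical sketch: treat a close-container failure as benign in the
// write-chunk path, the way applyTransaction already does.
enum Result { SUCCESS, CONTAINER_CLOSED, IO_ERROR }

class CloseContainerSketch {
  static Result handleWriteChunk(Runnable writeChunk) {
    try {
      writeChunk.run();
      return Result.SUCCESS;
    } catch (IllegalStateException e) {
      // Stand-in for the close-container exception: the container was closed
      // while the write was in flight. The client retries on another
      // container, so the state machine should not be failed here.
      return Result.CONTAINER_CLOSED;
    }
  }
}
{code}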



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2269) Provide config for fair/non-fair for OM RW Lock

2019-10-10 Thread Nanda kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar resolved HDDS-2269.
---
Fix Version/s: 0.5.0
   Resolution: Fixed

> Provide config for fair/non-fair for OM RW Lock
> ---
>
> Key: HDDS-2269
> URL: https://issues.apache.org/jira/browse/HDDS-2269
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Provide config in OzoneManager Lock for fair/non-fair for OM RW Lock.
> Created based on review comments during HDDS-2244.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-14245) Class cast error in GetGroups with ObserverReadProxyProvider

2019-10-10 Thread Erik Krogen (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen resolved HDFS-14245.

Resolution: Fixed

> Class cast error in GetGroups with ObserverReadProxyProvider
> 
>
> Key: HDFS-14245
> URL: https://issues.apache.org/jira/browse/HDFS-14245
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-12943
>Reporter: Shen Yinjie
>Assignee: Erik Krogen
>Priority: Major
> Fix For: 2.10.0, 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14245.000.patch, HDFS-14245.001.patch, 
> HDFS-14245.002.patch, HDFS-14245.003.patch, HDFS-14245.004.patch, 
> HDFS-14245.005.patch, HDFS-14245.006.patch, HDFS-14245.007.patch, 
> HDFS-14245.patch
>
>
> Run "hdfs groups" with ObserverReadProxyProvider, Exception throws as :
> {code:java}
> Exception in thread "main" java.io.IOException: Couldn't create proxy 
> provider class 
> org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider
>  at 
> org.apache.hadoop.hdfs.NameNodeProxiesClient.createFailoverProxyProvider(NameNodeProxiesClient.java:261)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:119)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:95)
>  at org.apache.hadoop.hdfs.tools.GetGroups.getUgmProtocol(GetGroups.java:87)
>  at org.apache.hadoop.tools.GetGroupsBase.run(GetGroupsBase.java:71)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>  at org.apache.hadoop.hdfs.tools.GetGroups.main(GetGroups.java:96)
> Caused by: java.lang.reflect.InvocationTargetException
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxiesClient.createFailoverProxyProvider(NameNodeProxiesClient.java:245)
>  ... 7 more
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hdfs.server.namenode.ha.NameNodeHAProxyFactory cannot be 
> cast to org.apache.hadoop.hdfs.server.namenode.ha.ClientHAProxyFactory
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.&lt;init&gt;(ObserverReadProxyProvider.java:123)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.&lt;init&gt;(ObserverReadProxyProvider.java:112)
>  ... 12 more
> {code}
> Similar to HDFS-14116; we did a simple fix.
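
For context, a minimal sketch of the failure mode in the trace: the two 
factories are sibling implementations of a common interface, so an 
unconditional cast to the client-side sibling fails when the server-side one 
is passed in. Class names mirror the Hadoop types; the bodies are hypothetical:

{code:java}
interface HAProxyFactory {}
class ClientHAProxyFactory implements HAProxyFactory {}
class NameNodeHAProxyFactory implements HAProxyFactory {}

class CastSketch {
  public static void main(String[] args) {
    HAProxyFactory factory = new NameNodeHAProxyFactory();
    // Siblings are not assignable to each other, so this cast compiles
    // but throws java.lang.ClassCastException at runtime.
    ClientHAProxyFactory client = (ClientHAProxyFactory) factory;
  }
}
{code}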



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-10-10 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1285/

[Oct 9, 2019 5:58:47 AM] (shashikant) HDDS-2233 - Remove ByteStringHelper and 
refactor the code to the place
[Oct 9, 2019 10:23:14 AM] (sunilg) YARN-9873. Mutation API Config Change need 
to update Version Number.
[Oct 9, 2019 11:09:09 AM] (snemeth) YARN-9356. Add more tests to ratio method 
in TestResourceCalculator.
[Oct 9, 2019 11:26:26 AM] (snemeth) YARN-9128. Use SerializationUtils from 
apache commons to serialize /
[Oct 9, 2019 1:46:16 PM] (elek) HDDS-2217. Remove log4j and audit configuration 
from the docker-config
[Oct 9, 2019 1:51:00 PM] (elek) HDDS-2217. Remove log4j and audit configuration 
from the docker-config
[Oct 9, 2019 2:16:44 PM] (elek) Squashed commit of the following:
[Oct 9, 2019 2:17:40 PM] (elek) HDDS-2265. integration.sh may report false 
negative
[Oct 9, 2019 5:50:28 PM] (surendralilhore) HDFS-14754. Erasure Coding : The 
number of Under-Replicated Blocks never




-1 overall


The following subsystems voted -1:
asflicense compile findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 

FindBugs :

   module:hadoop-cloud-storage-project/hadoop-cos 
   Redundant nullcheck of dir, which is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:[line 66] 
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may 
expose internal representation by returning CosNInputStream$ReadBuffer.buffer 
At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At 
CosNInputStream.java:[line 87] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, 
byte[]):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, 
File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long):in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long): new String(byte[]) At 
CosNativeFileSystemStore.java:[line 178] 
   org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, 
String, String, int) may fail to clean up java.io.InputStream Obligation to 
clean up resource created at CosNativeFileSystemStore.java:fail to clean up 
java.io.InputStream Obligation to clean up resource created at 
CosNativeFileSystemStore.java:[line 252] is not discharged 

FindBugs :

   module:hadoop-ozone/csi 
   Useless control flow in 
csi.v1.Csi$CapacityRange$Builder.maybeForceBuilderInitialization() At Csi.java: 
At Csi.java:[line 15977] 
   Class csi.v1.Csi$ControllerExpandVolumeRequest defines non-transient 
non-serializable instance field secrets_ In Csi.java:instance field secrets_ In 
Csi.java 
   Useless control flow in 
csi.v1.Csi$ControllerExpandVolumeRequest$Builder.maybeForceBuilderInitialization()
 At Csi.java: At Csi.java:[line 50408] 
   Useless control flow in 
csi.v1.Csi$ControllerExpandVolumeResponse$Builder.maybeForceBuilderInitialization()
 At Csi.java: 

Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-10-10 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/

[Oct 9, 2019 11:23:25 PM] (dazhou) HADOOP-16578 : Avoid FileSystem API calls 
when FileSystem already exists
[Oct 9, 2019 11:50:06 PM] (dazhou) HADOOP-16630 : Backport of Hadoop-16548 : 
Disable Flush() over config




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.util.TestReadWriteDiskValidator 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA 
   hadoop.hdfs.server.namenode.TestNameNodeHttpServerXFrame 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
   hadoop.yarn.sls.TestSLSRunner 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/artifact/out/diff-compile-cc-root-jdk1.8.0_222.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/artifact/out/diff-compile-javac-root-jdk1.8.0_222.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_222.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/470/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [160K]
   

[jira] [Created] (HDDS-2280) HddsUtils#CheckForException may return null in case the ratis exception cause is not set

2019-10-10 Thread Shashikant Banerjee (Jira)
Shashikant Banerjee created HDDS-2280:
-

 Summary: HddsUtils#CheckForException may return null in case the 
ratis exception cause is not set
 Key: HDDS-2280
 URL: https://issues.apache.org/jira/browse/HDDS-2280
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Affects Versions: 0.5.0
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee


HddsUtils#CheckForException checks that the cause is set properly to one of 
the defined/expected exceptions. In case Ratis throws a runtime exception, 
HddsUtils#CheckForException can return null and lead to a NullPointerException 
during writes.
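
For illustration, a hedged sketch of a null-safe variant: walk the cause 
chain, and fall back to the original exception instead of returning null when 
nothing matches. The expected-exception list is an assumption, not the real 
HddsUtils code:

{code:java}
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

class CheckForExceptionSketch {
  // Stand-ins for the defined/expected exception classes.
  private static final List<Class<? extends Exception>> EXPECTED =
      Arrays.asList(IOException.class, IllegalStateException.class);

  // Never returns null, so callers cannot hit a NullPointerException
  // when Ratis surfaces a runtime exception with no recognized cause.
  static Throwable checkForException(Exception e) {
    for (Throwable t = e; t != null; t = t.getCause()) {
      for (Class<? extends Exception> cls : EXPECTED) {
        if (cls.isInstance(t)) {
          return t;
        }
      }
    }
    return e; // fallback instead of null
  }
}
{code}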



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2266) Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path (Ozone)

2019-10-10 Thread Shashikant Banerjee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee resolved HDDS-2266.
---
Resolution: Fixed

Thanks [~swagle] for the contribution. I have committed this.

> Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path 
> (Ozone)
> 
>
> Key: HDDS-2266
> URL: https://issues.apache.org/jira/browse/HDDS-2266
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone CLI, Ozone Manager
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The arguments to LOG.trace and LOG.debug statements are evaluated even when 
> debug/trace logging is disabled. This jira proposes to wrap all the 
> trace/debug logging with LOG.isDebugEnabled and LOG.isTraceEnabled checks to 
> prevent that evaluation.
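
For illustration, the guard pattern proposed above, sketched with SLF4J; the 
guard matters when an argument is expensive to compute, since parameterized 
messages alone defer only the string formatting:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class GuardedLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(GuardedLoggingSketch.class);

  void onWrite(byte[] chunk) {
    // Without the guard, expensiveSummary() would run on every write
    // even when DEBUG logging is disabled.
    if (LOG.isDebugEnabled()) {
      LOG.debug("wrote chunk: {}", expensiveSummary(chunk));
    }
  }

  private static String expensiveSummary(byte[] chunk) {
    return "len=" + chunk.length; // imagine something costly here
  }
}
{code}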



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop Ozone 0.4.1-alpha

2019-10-10 Thread Elek, Marton



+1

Thank you Nanda for the enormous work to make this release happen.



 * GPG Signatures are fine
 * SHA512 signatures are fine
 * Can be built from the source package (in isolated environment 
without cached hadoop/ozone artifacts)

 * Started the pseudo cluster with `compose/ozone`
 * Executed the FULL smoke-test suite (`cd compose && ./test-all.sh`); ALL 
passed except for some intermittent issues:
   * the kinit step failed due to a timeout, but after that all the secure 
tests passed. I think my laptop was too slow... plus I had other 
CPU-intensive tasks running in the meantime

 * Tested to create apache/hadoop-ozone:0.4.1 image
 * Using hadoop-docker-ozone/Dockerfile [1]
 * Started a single, one-node cluster + tested with the AWS CLI 
(REDUCED_REDUNDANCY) (`docker run elek/ozone:test`)
 * Started a pseudo cluster (`docker run elek/ozone:test cat 
docker-compose.yaml && docker run elek/ozone:test cat docker-config`)

 * Tested with kubernetes:
   * Used the image which is created earlier
   * Replaced the images under kubernetes/examples/minikube
   * Started with `kubectl apply -f` on a k3s (3!) cluster
   * Tested with `ozone sh` commands (put/get keys)


Marton

[1]:
```
docker build \
  --build-arg OZONE_URL=https://home.apache.org/~nanda/ozone/release/0.4.1/RC0/hadoop-ozone-0.4.1-alpha.tar.gz \
  -t elek/ozone-test .
```

On 10/4/19 7:42 PM, Nanda kumar wrote:

Hi Folks,

I have put together RC0 for Apache Hadoop Ozone 0.4.1-alpha.

The artifacts are at:
https://home.apache.org/~nanda/ozone/release/0.4.1/RC0/

The maven artifacts are staged at:
https://repository.apache.org/content/repositories/orgapachehadoop-1238/

The RC tag in git is at:
https://github.com/apache/hadoop/tree/ozone-0.4.1-alpha-RC0

And the public key used for signing the artifacts can be found at:
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS

This release contains 363 fixes/improvements [1].
Thanks to everyone who put in the effort to make this happen.

*The vote will run for 7 days, ending on October 11th at 11:59 pm IST.*
Note: This release is alpha quality; it’s not recommended for use in
production, but we believe that it’s stable enough to try out the feature
set and collect feedback.


[1] https://s.apache.org/yfudc

Thanks,
Team Ozone



-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org