[jira] [Created] (HDFS-14360) Some exceptions happened while using ISA-L

2019-03-11 Thread Lin Zhang (JIRA)
Lin Zhang created HDFS-14360:


 Summary: Some exceptions happened while using ISA-L
 Key: HDFS-14360
 URL: https://issues.apache.org/jira/browse/HDFS-14360
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ec, erasure-coding
Reporter: Lin Zhang


I built my Hadoop with ISA-L support. When I try to do a convert job, an 
exception happens:
{code}
[2019-03-12T11:39:03.183+08:00] [INFO] [1552362147634_CONVERT_CMD/test/zhanglin/1g(isLogger)] :
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x7fc42e182683, pid=17110, tid=0x7fc40ce9f700
#
# JRE version: Java(TM) SE Runtime Environment (8.0_121-b13) (build 1.8.0_121-b13)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.121-b13 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# V [libjvm.so+0x9bd683] SafepointSynchronize::begin()+0x263
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /software/servers/hadoop-2.7.1/hs_err_pid17110.log
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp

[2019-03-12T11:39:07.949+08:00] [ERROR] [pool-10-thread-1] : copy file /test/zhanglin/1g to /test/ttlconverter/factory/test/zhanglin/1g failed
[2019-03-12T11:39:07.949+08:00] [INFO] [DataXceiver for client DFSClient_NONMAPREDUCE_1740978034_1 at /172.22.176.69:40662 [Receiving block BP-442378117-172.16.150.142-1552360340470:blk_-9223372036854775792_1009]] : Exception for BP-442378117-172.16.150.142-1552360340470:blk_-9223372036854775792_1009
java.io.IOException: Premature EOF from inputStream
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:212)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at 
{code}
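
Two things are worth checking before re-running, both hedged sketches rather 
than a diagnosis. First, confirm the native ISA-L coder actually loaded (a 
silent fallback to the pure-Java coder would point the investigation 
elsewhere); `hadoop checknative -a` reports this, and the same check is 
scriptable, assuming a build where the 3.x-era class 
org.apache.hadoop.io.erasurecode.ErasureCodeNative is present:
{code:java}
import org.apache.hadoop.io.erasurecode.ErasureCodeNative;

// Hedged diagnostic sketch: verify the native ISA-L coder really loaded
// before blaming it for the SIGSEGV. Assumes the 3.x-era
// ErasureCodeNative class is available in this custom build.
public class IsalLoadCheck {
  public static void main(String[] args) {
    if (ErasureCodeNative.isNativeCodeLoaded()) {
      System.out.println("native EC coder loaded: "
          + ErasureCodeNative.getLibraryName());
    } else {
      System.out.println("native EC coder NOT loaded: "
          + ErasureCodeNative.getLoadingFailureReason());
    }
  }
}
{code}
Second, the crash banner itself notes core dumps were disabled; running 
"ulimit -c unlimited" before starting the JVM, as the banner suggests, would 
leave a core file to inspect alongside hs_err_pid17110.log.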

[jira] [Created] (HDDS-1250) In OM HA AllocateBlock call from OM should not happen on Ratis

2019-03-11 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1250:


 Summary: In OM HA AllocateBlock call from OM should not happen on 
Ratis
 Key: HDDS-1250
 URL: https://issues.apache.org/jira/browse/HDDS-1250
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


In OM HA, currently when allocateBlock is called, applyTransaction() on every 
OM node makes a call to SCM and writes the allocateBlock information into the 
OM DB. The problem with this is that each OM independently calls allocateBlock 
and appends new BlockInfo into OmKeyInfo, and it is also a correctness issue, 
as all OMs should have the same block information for a key (even though this 
might eventually be changed during key commit).
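
One plausible shape for the fix, sketched with hypothetical names 
(ScmBlockClient, OmMetadataStore, preExecute and so on are illustrative 
stand-ins, not the actual Ozone API): the leader OM calls SCM once before the 
request enters Ratis, embeds the allocated block in the replicated request, 
and applyTransaction() becomes a pure DB write.
{code:java}
// Hedged sketch only; every name here is a hypothetical stand-in used to
// illustrate the proposed split, not actual Ozone code.
interface ScmBlockClient { String allocateBlock(long size); }
interface OmMetadataStore { void addBlockToKey(String key, String blockId); }

class AllocateBlockFlow {
  private final ScmBlockClient scm;
  private final OmMetadataStore omDb;

  AllocateBlockFlow(ScmBlockClient scm, OmMetadataStore omDb) {
    this.scm = scm;
    this.omDb = omDb;
  }

  // Leader-only step, before the request is submitted to Ratis:
  // talk to SCM exactly once and record the chosen block in the request.
  String preExecute(long size) {
    return scm.allocateBlock(size);
  }

  // Replicated step, run in applyTransaction() on every OM: no SCM call,
  // just persist the block the leader already chose, so all OMs store
  // identical block information for the key.
  void applyTransaction(String key, String blockId) {
    omDb.addBlockToKey(key, blockId);
  }
}
{code}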






[jira] [Created] (HDDS-1249) Fix TestOzoneManagerHttpServer & TestStorageContainerManagerHttpServer

2019-03-11 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-1249:
---

 Summary: Fix TestOzoneManagerHttpServer & 
TestStorageContainerManagerHttpServer
 Key: HDDS-1249
 URL: https://issues.apache.org/jira/browse/HDDS-1249
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Manager, SCM
Affects Versions: 0.4.0
Reporter: Mukul Kumar Singh


Fix the following unit test failures:
{code}
java.lang.AssertionError
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer.testHttpPolicy(TestStorageContainerManagerHttpServer.java:114)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.junit.runners.Suite.runChild(Suite.java:127)
at org.junit.runners.Suite.runChild(Suite.java:26)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
{code}

and


{code}
java.lang.AssertionError
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer.testHttpPolicy(TestStorageContainerManagerHttpServer.java:109)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 

[jira] [Created] (HDFS-14359) Inherited ACL permissions masked when parent directory does not exist (mkdir -p)

2019-03-11 Thread Stephen O'Donnell (JIRA)
Stephen O'Donnell created HDFS-14359:


 Summary: Inherited ACL permissions masked when parent directory 
does not exist (mkdir -p)
 Key: HDFS-14359
 URL: https://issues.apache.org/jira/browse/HDFS-14359
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.3.0
Reporter: Stephen O'Donnell
Assignee: Stephen O'Donnell


There appears to be an issue with ACL inheritance if you 'mkdir' a directory 
such that the parent directories need to be created (i.e. mkdir -p).

If you have a folder /tmp2/testacls set up as:

{code}
hadoop fs -mkdir /tmp2
hadoop fs -mkdir /tmp2/testacls
hadoop fs -setfacl -m default:user:hive:rwx /tmp2/testacls
hadoop fs -setfacl -m default:user:flume:rwx /tmp2/testacls
hadoop fs -setfacl -m user:hive:rwx /tmp2/testacls
hadoop fs -setfacl -m user:flume:rwx /tmp2/testacls

hadoop fs -getfacl -R /tmp2/testacls
# file: /tmp2/testacls
# owner: kafka
# group: supergroup
user::rwx
user:flume:rwx
user:hive:rwx
group::r-x
mask::rwx
other::r-x
default:user::rwx
default:user:flume:rwx
default:user:hive:rwx
default:group::r-x
default:mask::rwx
default:other::r-x
{code}

If you then create a sub-directory in it, the ACLs are as expected:

{code}
hadoop fs -mkdir /tmp2/testacls/dir_from_mkdir

# file: /tmp2/testacls/dir_from_mkdir
# owner: sodonnell
# group: supergroup
user::rwx
user:flume:rwx
user:hive:rwx
group::r-x
mask::rwx
other::r-x
default:user::rwx
default:user:flume:rwx
default:user:hive:rwx
default:group::r-x
default:mask::rwx
default:other::r-x
{code}

However, if you mkdir -p a directory, the situation is not the same:

{code}
hadoop fs -mkdir -p /tmp2/testacls/dir_with_subdirs/sub1/sub2

# file: /tmp2/testacls/dir_with_subdirs
# owner: sodonnell
# group: supergroup
user::rwx
user:flume:rwx  #effective:r-x
user:hive:rwx   #effective:r-x
group::r-x
mask::r-x
other::r-x
default:user::rwx
default:user:flume:rwx
default:user:hive:rwx
default:group::r-x
default:mask::rwx
default:other::r-x

# file: /tmp2/testacls/dir_with_subdirs/sub1
# owner: sodonnell
# group: supergroup
user::rwx
user:flume:rwx  #effective:r-x
user:hive:rwx   #effective:r-x
group::r-x
mask::r-x
other::r-x
default:user::rwx
default:user:flume:rwx
default:user:hive:rwx
default:group::r-x
default:mask::rwx
default:other::r-x

# file: /tmp2/testacls/dir_with_subdirs/sub1/sub2
# owner: sodonnell
# group: supergroup
user::rwx
user:flume:rwx
user:hive:rwx
group::r-x
mask::rwx
other::r-x
default:user::rwx
default:user:flume:rwx
default:user:hive:rwx
default:group::r-x
default:mask::rwx
default:other::r-x
{code}

Notice that the leaf folder "sub2" is correct, but the two ancestor folders 
have their permissions masked. I believe this is a regression from the fix for 
HDFS-6962 with dfs.namenode.posix.acl.inheritance.enabled set to true, as the 
code has changed significantly from the earlier 2.6 / 2.8 branches.

I will submit a patch for this.
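
Until the patch lands, a hedged workaround sketch: re-open the access mask on 
the ancestors that mkdirs created, using the standard FileSystem ACL API (the 
paths below are the ones from the report above). The CLI equivalent is 
`hadoop fs -setfacl -m mask::rwx <dir>` per affected ancestor.
{code:java}
import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.AclEntryScope;
import org.apache.hadoop.fs.permission.AclEntryType;
import org.apache.hadoop.fs.permission.FsAction;

// Hedged workaround sketch, not the eventual fix: after an fs.mkdirs()
// (the API equivalent of mkdir -p), restore mask::rwx on the intermediate
// directories whose effective permissions were masked to r-x.
public class MkdirAclWorkaround {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());

    Path leaf = new Path("/tmp2/testacls/dir_with_subdirs/sub1/sub2");
    fs.mkdirs(leaf); // creates dir_with_subdirs and sub1 with the bad mask

    AclEntry openMask = new AclEntry.Builder()
        .setScope(AclEntryScope.ACCESS)
        .setType(AclEntryType.MASK)
        .setPermission(FsAction.ALL)
        .build();

    // Walk from the leaf's parent up to the pre-existing directory,
    // fixing the mask on each ancestor that mkdirs created.
    for (Path dir = leaf.getParent();
        !dir.toString().equals("/tmp2/testacls");
        dir = dir.getParent()) {
      fs.modifyAclEntries(dir, Collections.singletonList(openMask));
    }
  }
}
{code}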






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-03-11 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1072/

[Mar 10, 2019 5:42:17 AM] (github) HDDS-1242. In S3 when bucket already exists, 
it should just return
[Mar 10, 2019 5:50:46 AM] (github) HDDS-1240. Fix check style issues caused by 
HDDS-1196.




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml 
   hadoop-build-tools/src/main/resources/checkstyle/suppressions.xml 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-tools/hadoop-azure/src/config/checkstyle.xml 
   hadoop-tools/hadoop-resourceestimator/src/config/checkstyle.xml 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp
 
   Dead store to download in 
org.apache.hadoop.yarn.appcatalog.application.AppCatalogSolrClient.incrementDownload(SolrDocument,
 long) At 
AppCatalogSolrClient.java:org.apache.hadoop.yarn.appcatalog.application.AppCatalogSolrClient.incrementDownload(SolrDocument,
 long) At AppCatalogSolrClient.java:[line 306] 
   Boxing/unboxing to parse a primitive 
org.apache.hadoop.yarn.appcatalog.application.AppCatalogSolrClient.deployApp(String,
 Service) At 
AppCatalogSolrClient.java:org.apache.hadoop.yarn.appcatalog.application.AppCatalogSolrClient.deployApp(String,
 Service) At AppCatalogSolrClient.java:[line 266] 
   Boxing/unboxing to parse a primitive 
org.apache.hadoop.yarn.appcatalog.application.AppCatalogSolrClient.findAppStoreEntry(String)
 At 
AppCatalogSolrClient.java:org.apache.hadoop.yarn.appcatalog.application.AppCatalogSolrClient.findAppStoreEntry(String)
 At AppCatalogSolrClient.java:[line 192] 
   Boxing/unboxing to parse a primitive 
org.apache.hadoop.yarn.appcatalog.application.AppCatalogSolrClient.getRecommendedApps()
 At 
AppCatalogSolrClient.java:org.apache.hadoop.yarn.appcatalog.application.AppCatalogSolrClient.getRecommendedApps()
 At AppCatalogSolrClient.java:[line 98] 
   Boxing/unboxing to parse a primitive 
org.apache.hadoop.yarn.appcatalog.application.AppCatalogSolrClient.search(String)
 At 
AppCatalogSolrClient.java:org.apache.hadoop.yarn.appcatalog.application.AppCatalogSolrClient.search(String)
 At AppCatalogSolrClient.java:[line 131] 
   Write to static field 
org.apache.hadoop.yarn.appcatalog.application.AppCatalogSolrClient.urlString 
from instance method new 
org.apache.hadoop.yarn.appcatalog.application.AppCatalogSolrClient() At 
AppCatalogSolrClient.java:from instance method new 
org.apache.hadoop.yarn.appcatalog.application.AppCatalogSolrClient() At 
AppCatalogSolrClient.java:[line 67] 
   org.apache.hadoop.yarn.appcatalog.model.AppDetails.getEnv() may expose 
internal representation by returning AppDetails.env At AppDetails.java:by 
returning AppDetails.env At AppDetails.java:[line 70] 
   org.apache.hadoop.yarn.appcatalog.model.AppDetails.getPorts() may expose 
internal representation by returning AppDetails.ports At AppDetails.java:by 
returning AppDetails.ports At AppDetails.java:[line 54] 
   org.apache.hadoop.yarn.appcatalog.model.AppDetails.getVolumes() may 
expose internal representation by returning AppDetails.volumes At 
AppDetails.java:by returning AppDetails.volumes At AppDetails.java:[line 62] 
   org.apache.hadoop.yarn.appcatalog.model.AppDetails.setEnv(String[]) may 
expose internal representation by storing an externally mutable object into 
AppDetails.env At AppDetails.java:by storing an externally mutable object into 
AppDetails.env At AppDetails.java:[line 74] 
   org.apache.hadoop.yarn.appcatalog.model.AppDetails.setPorts(String[]) 
may expose internal representation by storing an externally mutable object into 
AppDetails.ports At AppDetails.java:by storing an externally mutable object 
into AppDetails.ports At AppDetails.java:[line 58] 
   org.apache.hadoop.yarn.appcatalog.model.AppDetails.setVolumes(String[]) 
may expose internal representation by storing an externally mutable object into 
AppDetails.volumes At AppDetails.java:by storing an externally mutable object 
into AppDetails.volumes At AppDetails.java:[line 66] 
   org.apache.hadoop.yarn.appcatalog.model.Application doesn't override 
org.apache.hadoop.yarn.service.api.records.Service.equals(Object) At 
Application.java:At Application.java:[line 1] 
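
The repeated "may expose internal representation" items above are FindBugs 
EI_EXPOSE_REP / EI_EXPOSE_REP2 warnings on array getters and setters; the 
usual remediation is a defensive copy on both sides. A hedged sketch with a 
hypothetical stand-in class (not the actual AppDetails code):
{code:java}
import java.util.Arrays;

// Hedged sketch of the standard defensive-copy fix; SafeDetails is a
// hypothetical stand-in, not the actual YARN appcatalog AppDetails class.
public class SafeDetails {
  private String[] env;

  // Return a copy so callers cannot mutate the internal array.
  public String[] getEnv() {
    return env == null ? null : Arrays.copyOf(env, env.length);
  }

  // Store a copy so later mutation of the caller's array cannot reach us.
  public void setEnv(String[] env) {
    this.env = env == null ? null : Arrays.copyOf(env, env.length);
  }
}
{code}
The "Boxing/unboxing to parse a primitive" items are normally resolved the 
same mechanical way: call Integer.parseInt(...) / Long.parseLong(...) where a 
primitive is wanted, instead of parsing through the boxed valueOf(...).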

Failed junit tests :

   hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes 
   hadoop.hdfs.TestMaintenanceState 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   

[jira] [Created] (HDFS-14358) Provide LiveNode and DeadNode filter in DataNode UI

2019-03-11 Thread Ravuri Sushma sree (JIRA)
Ravuri Sushma sree created HDFS-14358:
-

 Summary: Provide LiveNode and DeadNode filter in DataNode UI
 Key: HDFS-14358
 URL: https://issues.apache.org/jira/browse/HDFS-14358
 Project: Hadoop HDFS
  Issue Type: Wish
Affects Versions: 3.1.2
Reporter: Ravuri Sushma sree









Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-03-11 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/257/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml 
   hadoop-build-tools/src/main/resources/checkstyle/suppressions.xml 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle.xml 
   hadoop-tools/hadoop-resourceestimator/src/config/checkstyle.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   Class org.apache.hadoop.fs.GlobalStorageStatistics defines non-transient 
non-serializable instance field map In GlobalStorageStatistics.java:instance 
field map In GlobalStorageStatistics.java 

FindBugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Dead store to state in 
org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Saver.save(OutputStream,
 INodeSymlink) At 
FSImageFormatPBINode.java:org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Saver.save(OutputStream,
 INodeSymlink) At FSImageFormatPBINode.java:[line 623] 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.hdfs.web.TestTokenAspect 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.nodemanager.amrmproxy.TestFederationInterceptor 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer
 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
   hadoop.yarn.sls.nodemanager.TestNMSimulator 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/257/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/257/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/257/artifact/out/diff-compile-cc-root-jdk1.8.0_191.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/257/artifact/out/diff-compile-javac-root-jdk1.8.0_191.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/257/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/257/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/257/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/257/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/257/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/257/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/257/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/257/artifact/out/whitespace-tabs.txt
  [1.2M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/257/artifact/out/xml.txt
  [20K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/257/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
  [8.0K]
   

[jira] [Reopened] (HDDS-1248) TestSecureOzoneRpcClient fails intermittently

2019-03-11 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain reopened HDDS-1248:
---

The test calls BlockTokenIdentifier#setTestStub(true) in 
TestSecureOzoneRpcClient#testKeyOpFailureWithoutBlockToken. Since testStub is 
then true, all the concurrently running tests fail with a "Block token 
verification failed" exception.
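
Since the stub is process-global state, the usual containment is to restore it 
no matter how the test exits; a hedged JUnit 4 sketch of that pattern (the 
BlockTokenIdentifier import is elided because its exact package lives in the 
Ozone source tree, and exerciseKeyOpWithoutToken is a hypothetical helper):
{code:java}
import org.junit.After;
import org.junit.Test;

// Hedged sketch of containing the global test stub named above; only
// setTestStub(boolean) is taken from the report, the rest is illustrative.
public class TestKeyOpWithoutBlockToken {

  @After
  public void resetStub() {
    // Restore the default so later tests in the same JVM do not inherit
    // the stub and fail block token verification.
    BlockTokenIdentifier.setTestStub(false);
  }

  @Test
  public void testKeyOpFailureWithoutBlockToken() throws Exception {
    BlockTokenIdentifier.setTestStub(true);
    exerciseKeyOpWithoutToken();
  }

  private void exerciseKeyOpWithoutToken() {
    // ... perform a key operation that must fail without a block token ...
  }
}
{code}
Note this only protects tests that run after this one; tests running 
concurrently in the same JVM still see the stub while the test body executes, 
so true isolation requires running this test in its own fork or dropping the 
global flag.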

> TestSecureOzoneRpcClient fails intermittently
> -
>
> Key: HDDS-1248
> URL: https://issues.apache.org/jira/browse/HDDS-1248
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Priority: Major
> Fix For: 0.4.0
>
>
>  
> TestSecureOzoneRpcClient fails intermittently with the following exception.
> {code:java}
> java.io.IOException: Unexpected Storage Container Exception: 
> java.util.concurrent.ExecutionException: 
> java.util.concurrent.CompletionException: 
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  Block token verification failed. Fail to find any token (empty or null.
> {code}

[jira] [Resolved] (HDDS-1248) TestSecureOzoneRpcClient fails intermittently

2019-03-11 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain resolved HDDS-1248.
---
Resolution: Duplicate

> TestSecureOzoneRpcClient fails intermittently
> -
>
> Key: HDDS-1248
> URL: https://issues.apache.org/jira/browse/HDDS-1248
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Priority: Major
> Fix For: 0.4.0
>
>
>  
> TestSecureOzoneRpcClient fails intermittently with the following exception.
> {code:java}
> java.io.IOException: Unexpected Storage Container Exception: 
> java.util.concurrent.ExecutionException: 
> java.util.concurrent.CompletionException: 
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  Block token verification failed. Fail to find any token (empty or null.
> {code}

[jira] [Created] (HDDS-1248) TestSecureOzoneRpcClient fails intermittently

2019-03-11 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDDS-1248:
-

 Summary: TestSecureOzoneRpcClient fails intermittently
 Key: HDDS-1248
 URL: https://issues.apache.org/jira/browse/HDDS-1248
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Lokesh Jain
 Fix For: 0.4.0


 

TestSecureOzoneRpcClient fails intermittently with the following exception.
{code:java}
java.io.IOException: Unexpected Storage Container Exception: 
java.util.concurrent.ExecutionException: 
java.util.concurrent.CompletionException: 
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: 
Block token verification failed. Fail to find any token (empty or null.
at 
org.apache.hadoop.hdds.scm.storage.BlockOutputStream.handleFullBuffer(BlockOutputStream.java:338)
at 
org.apache.hadoop.hdds.scm.storage.BlockOutputStream.write(BlockOutputStream.java:238)
at 
org.apache.hadoop.ozone.client.io.BlockOutputStreamEntry.write(BlockOutputStreamEntry.java:131)
at 
org.apache.hadoop.ozone.client.io.KeyOutputStream.handleWrite(KeyOutputStream.java:310)
at 
org.apache.hadoop.ozone.client.io.KeyOutputStream.write(KeyOutputStream.java:271)
at 
org.apache.hadoop.ozone.client.io.OzoneOutputStream.write(OzoneOutputStream.java:49)
at 
org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.uploadPart(TestOzoneRpcClientAbstract.java:2188)
at 
org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.doMultipartUpload(TestOzoneRpcClientAbstract.java:2131)
at 
org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.testMultipartUpload(TestOzoneRpcClientAbstract.java:1721)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
Caused by: java.util.concurrent.ExecutionException: 
java.util.concurrent.CompletionException: 
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: 
Block token verification failed. Fail to find any token (empty or null.
at 
java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at 
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
at 
org.apache.hadoop.hdds.scm.storage.BlockOutputStream.waitOnFlushFutures(BlockOutputStream.java:543)
at 
org.apache.hadoop.hdds.scm.storage.BlockOutputStream.handleFullBuffer(BlockOutputStream.java:333)
... 35 more
Caused by: java.util.concurrent.CompletionException: 
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: 
Block token verification failed. Fail to find any token (empty or null.
at 

[jira] [Created] (HDDS-1247) Bump trunk ozone version to 0.5.0

2019-03-11 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1247:
--

 Summary: Bump trunk ozone version to 0.5.0
 Key: HDDS-1247
 URL: https://issues.apache.org/jira/browse/HDDS-1247
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Elek, Marton
Assignee: Elek, Marton
 Fix For: 0.5.0


The ozone-0.4 branch has been cut, so we need to update the version on trunk.






[jira] [Created] (HDFS-14357) Update the relevant docs for SCM cache support

2019-03-11 Thread Feilong He (JIRA)
Feilong He created HDFS-14357:
-

 Summary: Update the relevant docs for SCM cache support
 Key: HDFS-14357
 URL: https://issues.apache.org/jira/browse/HDFS-14357
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Feilong He
Assignee: Feilong He









[jira] [Created] (HDFS-14356) Implement SCM cache with native PMDK libs

2019-03-11 Thread Feilong He (JIRA)
Feilong He created HDFS-14356:
-

 Summary: Implement SCM cache with native PMDK libs
 Key: HDFS-14356
 URL: https://issues.apache.org/jira/browse/HDFS-14356
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: caching, datanode
Reporter: Feilong He
Assignee: Feilong He









[jira] [Created] (HDFS-14355) Implement SCM cache by using mapped byte buffer without PMDK dependency

2019-03-11 Thread Feilong He (JIRA)
Feilong He created HDFS-14355:
-

 Summary: Implement SCM cache by using mapped byte buffer without 
PMDK dependency
 Key: HDFS-14355
 URL: https://issues.apache.org/jira/browse/HDFS-14355
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: caching, datanode
Reporter: Feilong He









[jira] [Created] (HDFS-14354) Refactor MappableBlock to align with the implementation of SCM cache

2019-03-11 Thread Feilong He (JIRA)
Feilong He created HDFS-14354:
-

 Summary: Refactor MappableBlock to align with the implementation 
of SCM cache
 Key: HDFS-14354
 URL: https://issues.apache.org/jira/browse/HDFS-14354
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: caching, datanode
Reporter: Feilong He
Assignee: Feilong He









[jira] [Created] (HDDS-1246) Support ozone delegation token utility subcmd for Ozone CLI

2019-03-11 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-1246:


 Summary: Support ozone delegation token utility subcmd for Ozone 
CLI
 Key: HDDS-1246
 URL: https://issues.apache.org/jira/browse/HDDS-1246
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


This allows running dtutil in integration tests and dev testing to demo Ozone 
security.
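
`hadoop dtutil` discovers per-scheme token fetchers through the 
org.apache.hadoop.security.token.DtFetcher service interface, so Ozone support 
here presumably means shipping such a fetcher and registering it via 
META-INF/services. A hedged sketch of the shape; OzoneDtFetcher and the o3 
scheme are assumptions, not the actual patch:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.token.DtFetcher;
import org.apache.hadoop.security.token.Token;

// Hedged sketch: implementing DtFetcher is how `hadoop dtutil get` learns
// to fetch tokens for a new scheme. OzoneDtFetcher is a hypothetical name.
public class OzoneDtFetcher implements DtFetcher {

  // Scheme that dtutil matches against its URL argument; "o3" is an
  // assumption for illustration.
  public Text getServiceName() {
    return new Text("o3");
  }

  public boolean isTokenRequired() {
    return true;
  }

  // Fetch an OM delegation token for `renewer`, add it to the Credentials
  // that dtutil writes to the token file, and return it.
  public Token<?> addDelegationTokens(Configuration conf, Credentials creds,
      String renewer, String url) throws Exception {
    // ... connect to the OzoneManager named by `url` and request the
    // delegation token (elided in this sketch) ...
    return null; // placeholder
  }
}
{code}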






[jira] [Created] (HDFS-14353) Erasure Coding: metric xmitsInProgress becomes negative.

2019-03-11 Thread maobaolong (JIRA)
maobaolong created HDFS-14353:
-

 Summary: Erasure Coding: metric xmitsInProgress becomes negative.
 Key: HDFS-14353
 URL: https://issues.apache.org/jira/browse/HDFS-14353
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, erasure-coding
Affects Versions: 3.3.0
Reporter: maobaolong









[jira] [Created] (HDDS-1245) OM delegation expiration time should use Time.now instead of Time.monotonicNow

2019-03-11 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-1245:


 Summary: OM delegation expiration time should use Time.now instead 
of Time.monotonicNow
 Key: HDDS-1245
 URL: https://issues.apache.org/jira/browse/HDDS-1245
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


Otherwise, we will set an incorrect expiration date on the OM delegation 
token, like below:

{code}
ozone dtutil print /tmp/om.dt

File: /tmp/om.dt
Token kind   Service  Renewer  Exp date           URL enc token
OzoneToken   om:9862  yarn     *1/8/70 12:03 PM*
{code}
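
The 1/8/70 date is the giveaway: Time.monotonicNow() measures elapsed time 
from an arbitrary origin (roughly JVM start), so storing its value as an 
epoch-millis expiration lands near January 1970. A short illustration using 
the real org.apache.hadoop.util.Time utility:
{code:java}
import java.util.Date;
import org.apache.hadoop.util.Time;

public class ExpiryClockDemo {
  public static void main(String[] args) {
    long renewInterval = 24L * 60 * 60 * 1000; // 24 hours in millis

    // Wrong for an absolute expiry: monotonicNow() counts from an
    // arbitrary origin, so read as epoch millis it yields a date near
    // 1/1/1970, exactly the dtutil output above.
    long badExpiry = Time.monotonicNow() + renewInterval;

    // Right for an absolute expiry: now() is wall-clock epoch millis.
    long goodExpiry = Time.now() + renewInterval;

    System.out.println(new Date(badExpiry));  // ~January 1970
    System.out.println(new Date(goodExpiry)); // tomorrow
  }
}
{code}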



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org