[jira] [Created] (HDDS-674) Not able to get key after distcp job passes

2018-10-16 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-674:
-

 Summary: Not able to get key after distcp job passes
 Key: HDDS-674
 URL: https://issues.apache.org/jira/browse/HDDS-674
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Namit Maheshwari


Getting the key back after a successful distcp run fails with:
{code:java}
-bash-4.2$ ozone sh key get /volume2/bucket2/distcp/wordcount_input_1.txt /tmp/wordcountDistcp.txt
2018-10-17 00:25:07,904 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Lookup key failed, error:KEY_NOT_FOUND{code}






[jira] [Created] (HDDS-673) Suppress "Key not found" exception log with stack trace in OM

2018-10-16 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-673:
---

 Summary: Suppress "Key not found" exception log with stack trace in OM
 Key: HDDS-673
 URL: https://issues.apache.org/jira/browse/HDDS-673
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


We have observed too many KNF (key not found) exception stack traces in the OM logs.

{code}
2018-10-16 21:20:40,248 ERROR org.apache.hadoop.ozone.om.KeyManagerImpl: Get key failed for volume:volume2 bucket:bucket2 key:testo3/.hive-staging_hive_2018-10-16_21-20-29_158_7074026959914132025-1
org.apache.hadoop.ozone.om.exceptions.OMException: Key not found
        at org.apache.hadoop.ozone.om.KeyManagerImpl.lookupKey(KeyManagerImpl.java:368)
        at org.apache.hadoop.ozone.om.OzoneManager.lookupKey(OzoneManager.java:881)
        at org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.lookupKey(OzoneManagerProtocolServerSideTranslatorPB.java:400)
        at org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java:39299)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
{code}
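
For illustration, a minimal sketch of the kind of suppression this issue asks for (a hypothetical helper, not the actual patch; only the message format is taken from the trace above): log the expected "key not found" case as a single line without the throwable, and keep the full stack trace for unexpected errors.

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical sketch: route expected lookup misses to a one-line WARN
// and reserve stack traces for unexpected failures.
public final class LookupFailureLogger {
  private static final Logger LOG =
      LoggerFactory.getLogger(LookupFailureLogger.class);

  private LookupFailureLogger() { }

  public static void logLookupFailure(String volume, String bucket,
      String key, Exception ex, boolean keyNotFound) {
    if (keyNotFound) {
      // Expected when clients look up missing keys; no stack trace needed.
      LOG.warn("Get key failed (key not found) for volume:{} bucket:{} key:{}",
          volume, bucket, key);
    } else {
      // Unexpected failure; passing ex as the last argument logs the trace.
      LOG.error("Get key failed for volume:{} bucket:{} key:{}",
          volume, bucket, key, ex);
    }
  }
}
{code}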






[jira] [Created] (HDDS-672) Spark shell throws OzoneFileSystem not found

2018-10-16 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-672:
-

 Summary: Spark shell throws OzoneFileSystem not found
 Key: HDDS-672
 URL: https://issues.apache.org/jira/browse/HDDS-672
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Namit Maheshwari


The Spark shell throws "OzoneFileSystem not found" if the Ozone jars are not specified via the --jars option. A sketch of why the classpath matters follows below.
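
A hedged illustration: opening an o3:// path makes Hadoop load the filesystem implementation class, which fails when the Ozone jars are absent from the driver and executor classpath. The fs.o3.impl key below is an assumption based on the o3:// scheme in this digest; treat the snippet as a probe, not a fix.

{code:java}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// Hypothetical probe: without the Ozone filesystem jar on the classpath,
// this lookup fails because the configured implementation class cannot
// be loaded.
public class O3FsProbe {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.o3.impl", "org.apache.hadoop.fs.ozone.OzoneFileSystem");
    FileSystem fs = FileSystem.get(URI.create("o3://bucket2.volume2/"), conf);
    System.out.println("Loaded filesystem: " + fs.getClass().getName());
  }
}
{code}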






[jira] [Created] (HDDS-671) Hive HSI insert tries to create data in Hdfs for Ozone external table

2018-10-16 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-671:
-

 Summary: Hive HSI insert tries to create data in Hdfs for Ozone external table
 Key: HDDS-671
 URL: https://issues.apache.org/jira/browse/HDDS-671
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Namit Maheshwari


A Hive HSI insert tries to create data in HDFS for an Ozone external table when "hive.server2.enable.doAs" is set to true.






[jira] [Created] (HDDS-670) Hive insert fails against Ozone external table

2018-10-16 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-670:
-

 Summary: Hive insert fails against Ozone external table
 Key: HDDS-670
 URL: https://issues.apache.org/jira/browse/HDDS-670
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Namit Maheshwari


The insert fails with:
{code:java}
ERROR : Job Commit failed with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Unable to move: o3://bucket2.volume2/testo3/.hive-staging_hive_2018-10-16_21-09-35_130_1001829123585250245-1/_tmp.-ext-1 to: o3://bucket2.volume2/testo3/.hive-staging_hive_2018-10-16_21-09-35_130_1001829123585250245-1/_tmp.-ext-1.moved)'
org.apache.hadoop.hive.ql.metadata.HiveException: Unable to move: o3://bucket2.volume2/testo3/.hive-staging_hive_2018-10-16_21-09-35_130_1001829123585250245-1/_tmp.-ext-1 to: o3://bucket2.volume2/testo3/.hive-staging_hive_2018-10-16_21-09-35_130_1001829123585250245-1/_tmp.-ext-1.moved
{code}






[jira] [Created] (HDFS-13999) Bogus missing block warning if the file is under construction when NN starts

2018-10-16 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-13999:
--

 Summary: Bogus missing block warning if the file is under construction when NN starts
 Key: HDFS-13999
 URL: https://issues.apache.org/jira/browse/HDFS-13999
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang
 Attachments: webui missing blocks.png

We found an interesting case where the web UI displays a few missing blocks but doesn't state which files are corrupt, while fsck reports the file system as healthy. This bug is similar to HDFS-10827 and HDFS-8533.

(See the attachment for an example.)

Using Dynamometer, I was able to reproduce the bug and realized that the "missing" blocks are actually healthy, but somehow neededReplications doesn't get updated when the NN receives block reports. What's more interesting is that the files associated with the "missing" blocks are under construction when the NN starts, so after a while the NN prints file recovery logs.

Given that, I determined the following code is the source of the bug:
{code:java|title=BlockManager#addStoredBlock}
// if file is under construction, then done for now
if (bc.isUnderConstruction()) {
  return storedBlock;
}
{code}
This check is wrong because a file may have multiple blocks, and the first block may already be complete. In that case, the neededReplications structure doesn't get updated for the first block, hence the missing block warning on the web UI. More appropriately, the code should check the state of the block itself, not the file.

Fortunately, it was unintentionally fixed via HDFS-9754:
{code:java}
// if block is still under construction, then done for now
if (!storedBlock.isCompleteOrCommitted()) {
  return storedBlock;
}
{code}
We should bring this fix into branch-2.7 too. That said, this is a harmless warning, and it should go away after the under-construction files are recovered and the NN restarts (or full block reports are forced).

Kudos to Dynamometer! It would be impossible to reproduce this bug without the 
tool. And thanks [~smeng] for helping with the reproduction.






[jira] [Created] (HDDS-669) Consider removing duplication of StorageType

2018-10-16 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-669:
--

 Summary: Consider removing duplication of StorageType
 Key: HDDS-669
 URL: https://issues.apache.org/jira/browse/HDDS-669
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Arpit Agarwal


The StorageType class is currently duplicated in hadoop-hdds. We can just use 
the version in hadoop-common.
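
At a use site the change should reduce to an import swap; a hedged sketch (the hdds package in the comment is an assumption about where the duplicate lives):

{code:java}
// Before (duplicated enum in hadoop-hdds; exact package assumed):
// import org.apache.hadoop.hdds.protocol.StorageType;

// After: use the shared definition from hadoop-common.
import org.apache.hadoop.fs.StorageType;

public class StorageTypeExample {
  public static void main(String[] args) {
    StorageType type = StorageType.DISK;
    // isTransient() comes from the hadoop-common enum.
    System.out.println(type + " transient=" + type.isTransient());
  }
}
{code}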






[jira] [Created] (HDFS-13998) ECAdmin NPE with -setPolicy -replicate

2018-10-16 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-13998:


 Summary: ECAdmin NPE with -setPolicy -replicate
 Key: HDFS-13998
 URL: https://issues.apache.org/jira/browse/HDFS-13998
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding
Affects Versions: 3.2.0, 3.1.2
Reporter: Xiao Chen
Assignee: Zsolt Venczel


HDFS-13732 tried to improve the output of the console tool, but we missed the fact that for replication, {{getErasureCodingPolicy}} returns null.

This jira is to fix the resulting NPE in ECAdmin and add a unit test.
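
A minimal sketch of the null guard the fix needs (a hypothetical helper; the "replication" label and surrounding code are assumptions, not the actual patch):

{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;

public final class EcPolicyName {
  private EcPolicyName() { }

  // With -setPolicy -replicate the path carries no EC policy, so
  // getErasureCodingPolicy() returns null and must not be dereferenced
  // when building the console output.
  static String describe(DistributedFileSystem dfs, Path path)
      throws IOException {
    ErasureCodingPolicy policy = dfs.getErasureCodingPolicy(path);
    return policy == null ? "replication" : policy.getName();
  }
}
{code}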






[jira] [Resolved] (HDDS-556) Update the acceptance test location mentioned in ozone document

2018-10-16 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDDS-556.

Resolution: Fixed

Looks like this is no longer an issue. I could not find the incorrect docs in 
trunk any more. [~xyao] please reopen if you still see this.

> Update the acceptance test location mentioned in ozone document
> ---
>
> Key: HDDS-556
> URL: https://issues.apache.org/jira/browse/HDDS-556
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.2.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: newbie
>
> This is found during the release verification of 0.2.1. In the "Building from
> Sources" page:
> "please follow the instructions in the *README.md* in
> {{$hadoop_src/hadoop-ozone/acceptance-test}}"
>
> The correct location should be
> {{$hadoop_src/hadoop-ozone/dist/target/$hdds_version/smoketest}}






[jira] [Created] (HDFS-13997) Secondary NN Web UI displays nothing, and the console log shows moment is not defined.

2018-10-16 Thread Rui Chen (JIRA)
Rui Chen created HDFS-13997:
---

 Summary: Secondary NN Web UI displays nothing, and the console log shows moment is not defined.
 Key: HDFS-13997
 URL: https://issues.apache.org/jira/browse/HDFS-13997
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.1.1
Reporter: Rui Chen
 Attachments: Selection_030.png, Selection_031.png

It seems that the Secondary NameNode Web UI uses dfs-dust.js, which depends on moment.js. However, the page cannot find moment.js and shows the errors in the attached screenshots.






[jira] [Created] (HDDS-668) Replica Manager should use replica with latest delete transactionID

2018-10-16 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDDS-668:


 Summary: Replica Manager should use replica with latest delete transactionID
 Key: HDDS-668
 URL: https://issues.apache.org/jira/browse/HDDS-668
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Lokesh Jain
Assignee: Lokesh Jain


Currently the replica manager does not use the delete transactionID when choosing the replica to be replicated. This Jira aims to store the delete transactionID for each replica so that the replica manager can choose the replica with the latest delete transactionID.
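
A hedged sketch of the selection rule once the per-replica ID is stored (ContainerReplica and its getter are assumed names, not the actual SCM types):

{code:java}
import java.util.Comparator;
import java.util.List;

public final class ReplicaChooser {
  // Assumed shape of a replica record carrying the stored delete
  // transaction ID; the real SCM type may differ.
  interface ContainerReplica {
    long getDeleteTransactionId();
  }

  private ReplicaChooser() { }

  // Prefer the replica that has applied the most delete transactions, so
  // replication does not resurrect keys already deleted elsewhere.
  static ContainerReplica chooseSource(List<ContainerReplica> replicas) {
    return replicas.stream()
        .max(Comparator.comparingLong(ContainerReplica::getDeleteTransactionId))
        .orElseThrow(() -> new IllegalStateException("no replicas"));
  }
}
{code}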






[jira] [Resolved] (HDDS-626) ozone.metadata.dirs should be tagged as REQUIRED

2018-10-16 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDDS-626.

Resolution: Not A Bug

Thanks for checking [~dineshchitlangia]. Resolving this.

> ozone.metadata.dirs should be tagged as REQUIRED
> 
>
> Key: HDDS-626
> URL: https://issues.apache.org/jira/browse/HDDS-626
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
>
> ozone.metadata.dirs is a required config but is missing the REQUIRED tag.






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-10-16 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/928/

[Oct 15, 2018 10:08:42 AM] (sunilg) YARN-8836. Add tags and attributes in resource definition. Contributed
[Oct 15, 2018 10:18:26 AM] (vinayakumarb) Fix potential FSImage corruption. Contributed by Daryn Sharp.
[Oct 15, 2018 10:40:25 AM] (bibinchundatt) YARN-8830. SLS tool fix node addition. Contributed by Bibin A Chundatt.
[Oct 15, 2018 3:51:57 PM] (sunilg) YARN-8869. YARN Service Client might not work correctly with RM REST API
[Oct 15, 2018 4:37:20 PM] (haibochen) YARN-8775. TestDiskFailures.testLocalDirsFailures sometimes can fail on
[Oct 15, 2018 4:51:26 PM] (inigoiri) HDFS-13987. RBF: Review of RandomResolver Class. Contributed by BELUGA
[Oct 15, 2018 5:51:55 PM] (xiao) HADOOP-14445. Addendum: Use DelegationTokenIssuer to create KMS
[Oct 15, 2018 6:52:38 PM] (jitendra) HDDS-629. Make ApplyTransaction calls in ContainerStateMachine
[Oct 15, 2018 9:53:55 PM] (stevel) HADOOP-15851. Disable wildfly logs to the console. Contributed by
[Oct 15, 2018 10:02:37 PM] (rkanter) HADOOP-15853. TestConfigurationDeprecation leaves behind a temp file,
[Oct 15, 2018 11:45:08 PM] (arp) HDDS-490. Improve om and scm start up options . Contributed by Namit




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s):
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

FindBugs :

   module:hadoop-hdfs-project/hadoop-hdfs
   Dead store to state in org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Saver.save(OutputStream, INodeSymlink) At FSImageFormatPBINode.java:[line 663]

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_libhdfs_threaded_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.hdfs.protocol.TestLayoutVersion
   hadoop.yarn.server.nodemanager.containermanager.TestContainerManager
   hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
   hadoop.yarn.applications.distributedshell.TestDistributedShell
   hadoop.fs.azure.metrics.TestNativeAzureFileSystemMetricsSystem

   cc:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/928/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/928/artifact/out/diff-compile-javac-root.txt [300K]

   checkstyle:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/928/artifact/out/diff-checkstyle-root.txt [17M]

   hadolint:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/928/artifact/out/diff-patch-hadolint.txt [4.0K]

   pathlen:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/928/artifact/out/pathlen.txt [12K]

   pylint:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/928/artifact/out/diff-patch-pylint.txt [40K]

   shellcheck:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/928/artifact/out/diff-patch-shellcheck.txt [20K]

   shelldocs:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/928/artifact/out/diff-patch-shelldocs.txt [12K]

   whitespace:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/928/artifact/out/whitespace-eol.txt [9.3M]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/928/artifact/out/whitespace-tabs.txt [1.1M]

   xml:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/928/artifact/out/xml.txt [4.0K]

   findbugs:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/928/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html [8.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/928/artifact/out/branch-findbugs-hadoop-hdds_client.txt [4.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/928/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt [4.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/928/artifact/out/branch-findbugs-hadoop-hdds_framework.txt [4.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/928/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt [4.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/928/artifact/out/branch-findbugs-hadoop-hdds_tools.txt [4.0K]

Re: datanode hostname has different expression

2018-10-16 Thread ZongtianHou
Does anyone know about this?
> On 11 Oct 2018, at 11:46 AM, ZongtianHou wrote:
> 
> Hi, everyone:
>   I use the libhdfs3 API to access HDFS. The hostnames of the data block
> locations I get from the NameNode are not consistent: on my local host they
> may be "localhost" or the actual host name, and the IP may be 127.0.0.1 or
> the host's actual IP address. What is the principle behind this, and how can
> I get them in a consistent format?





[jira] [Created] (HDDS-667) Fix TestOzoneFileInterfaces

2018-10-16 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-667:
--

 Summary: Fix TestOzoneFileInterfaces
 Key: HDDS-667
 URL: https://issues.apache.org/jira/browse/HDDS-667
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Filesystem
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh


The test started failing after commit e13a38f4bc358666e64687636cf7b025bce83b46 (HDDS-629), with the following exception:

{code}
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.hadoop.fs.ozone.TestOzoneFileInterfaces
[ERROR] Tests run: 8, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 54.718 s <<< FAILURE! - in org.apache.hadoop.fs.ozone.TestOzoneFileInterfaces
[ERROR] testOzFsReadWrite[1](org.apache.hadoop.fs.ozone.TestOzoneFileInterfaces)  Time elapsed: 7.1 s  <<< ERROR!
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: Unable to find the block.
at org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.validateContainerResponse(ContainerProtocolCalls.java:429)
at org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.getBlock(ContainerProtocolCalls.java:103)
at org.apache.hadoop.ozone.client.io.ChunkGroupInputStream.getFromOmKeyInfo(ChunkGroupInputStream.java:290)
at org.apache.hadoop.ozone.client.rpc.RpcClient.getKey(RpcClient.java:493)
at org.apache.hadoop.ozone.client.OzoneBucket.readKey(OzoneBucket.java:272)
at org.apache.hadoop.fs.ozone.OzoneFileSystem.open(OzoneFileSystem.java:173)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:899)
at org.apache.hadoop.fs.ozone.TestOzoneFileInterfaces.testOzFsReadWrite(TestOzoneFileInterfaces.java:175)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.junit.runners.Suite.runChild(Suite.java:127)
at org.junit.runners.Suite.runChild(Suite.java:26)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
{code}


