[jira] [Created] (HDFS-13708) change Files instead of NativeIO

2018-06-28 Thread lqjacklee (JIRA)
lqjacklee created HDFS-13708:


 Summary: change Files instead of NativeIO
 Key: HDFS-13708
 URL: https://issues.apache.org/jira/browse/HDFS-13708
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: lqjacklee


HDFS depends on native code (NativeIO) to invoke Windows-related file 
operations. Since JDK 1.7, java.nio.file.Files supports file I/O across 
different FileSystem implementations and platforms, so it could replace these 
native calls.
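
For illustration, a minimal sketch of the kind of replacement this suggests, 
using java.nio.file.Files where NativeIO is used today (the class and the 
specific operations below are hypothetical examples, not the actual patch):

{code:java}
import java.io.IOException;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.nio.file.attribute.PosixFilePermissions;

public class FilesInsteadOfNativeIO {
  public static void main(String[] args) throws IOException {
    Path src = Paths.get("source.txt");
    Path dst = Paths.get("target.txt");

    // Atomic rename without platform-specific native code.
    Files.move(src, dst, StandardCopyOption.ATOMIC_MOVE);

    // Set POSIX permissions only where the underlying FileSystem supports
    // them; on Windows this branch is simply skipped.
    if (FileSystems.getDefault().supportedFileAttributeViews()
        .contains("posix")) {
      Files.setPosixFilePermissions(dst,
          PosixFilePermissions.fromString("rw-r--r--"));
    }
  }
}
{code}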



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-06-28 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/512/

[Jun 27, 2018 5:35:15 PM] (sunilg) YARN-8401. [UI2] new ui is not accessible 
with out internet connection.
[Jun 27, 2018 7:39:15 PM] (xyao) HDDS-194. Remove NodePoolManager and node pool 
handling from SCM.
[Jun 27, 2018 8:25:45 PM] (xyao) Revert "HDDS-194. Remove NodePoolManager and 
node pool handling from
[Jun 27, 2018 8:28:00 PM] (xyao) HDDS-194. Remove NodePoolManager and node pool 
handling from SCM.
[Jun 27, 2018 8:35:30 PM] (xyao) HDDS-186. Create under replicated queue. 
Contributed by Ajay Kumar.
[Jun 27, 2018 8:56:45 PM] (xyao) HDDS-170. Fix 
TestBlockDeletingService#testBlockDeletionTimeout.
[Jun 27, 2018 9:15:15 PM] (aengineer) HDDS-94. Change ozone datanode command to 
start the standalone datanode
[Jun 27, 2018 9:18:25 PM] (aengineer) HDDS-193. Make Datanode heartbeat 
dispatcher in SCM event based.
[Jun 28, 2018 5:37:22 AM] (aajisaka) HADOOP-15495. Upgrade commons-lang version 
to 3.7 in
[Jun 28, 2018 5:58:40 AM] (aajisaka) HADOOP-14313. Replace/improve Hadoop's 
byte[] comparator. Contributed by
[Jun 28, 2018 6:39:33 AM] (aengineer) HDDS-195. Create generic CommandWatcher 
utility. Contributed by Elek,




-1 overall


The following subsystems voted -1:
compile mvninstall pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h 00m 00s)
unit


Specific tests:

Failed junit tests :

   hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
   hadoop.fs.contract.rawlocal.TestRawlocalContractAppend 
   hadoop.fs.TestFileUtil 
   hadoop.fs.TestFsShellCopy 
   hadoop.fs.TestFsShellList 
   hadoop.http.TestHttpServer 
   hadoop.http.TestHttpServerLogs 
   hadoop.io.nativeio.TestNativeIO 
   hadoop.ipc.TestIPC 
   hadoop.ipc.TestSocketFactory 
   hadoop.metrics2.impl.TestStatsDMetrics 
   hadoop.security.TestSecurityUtil 
   hadoop.security.TestShellBasedUnixGroupsMapping 
   hadoop.security.token.TestDtUtilShell 
   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.util.TestNativeCodeLoader 
   hadoop.hdfs.qjournal.server.TestJournalNode 
   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.hdfs.server.balancer.TestBalancerRPCDelay 
   hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks 
   hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestProvidedImpl 
   hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage 
   hadoop.hdfs.server.datanode.TestBlockScanner 
   hadoop.hdfs.server.datanode.TestDataNodeFaultInjector 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand 
   hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC 
   hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA 
   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics 
   hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs 
   hadoop.hdfs.TestDFSShell 
   hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy 
   hadoop.hdfs.TestDFSUpgradeFromImage 
   hadoop.hdfs.TestFetchImage 
   hadoop.hdfs.TestFileConcurrentReader 
   hadoop.hdfs.TestHDFSFileSystemContract 
   hadoop.hdfs.TestLeaseRecovery 
   hadoop.hdfs.TestPread 
   hadoop.hdfs.TestReconstructStripedFile 
   hadoop.hdfs.TestSecureEncryptionZoneWithKMS 
   hadoop.hdfs.TestTrashWithSecureEncryptionZones 
   hadoop.hdfs.tools.TestDFSAdmin 
   hadoop.hdfs.web.TestWebHDFS 
   hadoop.hdfs.web.TestWebHdfsUrl 
   hadoop.fs.http.server.TestHttpFSServerWebServer 
   hadoop.yarn.logaggregation.filecontroller.ifile.TestLogAggregationIndexFileController 
   hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch 
   hadoop.yarn.server.nodemanager.containermanager.TestAuxServices 
   hadoop.yarn.server.nodemanager.containermanager.TestContainerManager 
   hadoop.yarn.server.nodemanager.TestContainerExecutor 
   hadoop.yarn.server.nodemanager.TestNodeManagerResync 
   hadoop.yarn.server.webproxy.amfilter.TestAmFilter 
   hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.timeline.security.TestTimelineAuthenticationFilterForV1 
   hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart 
   hadoop.yarn.server.resourcemanager.scheduler.capacity.conf.TestFSSchedulerConfigurationStore 

[jira] [Created] (HDDS-205) Add metrics to HddsDispatcher

2018-06-28 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-205:
---

 Summary: Add metrics to HddsDispatcher
 Key: HDDS-205
 URL: https://issues.apache.org/jira/browse/HDDS-205
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


This patch adds metrics to the newly added HddsDispatcher.

It reuses the already existing ContainerMetrics.
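
For reference, a minimal sketch of how per-operation counters are typically 
wired up with Hadoop's metrics2 library (the class and metric names below are 
illustrative, not the actual ContainerMetrics fields):

{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

// Hypothetical metrics source for a dispatcher; the real patch would reuse
// ContainerMetrics rather than define a new source.
@Metrics(about = "HddsDispatcher metrics", context = "dfs")
public class DispatcherMetrics {
  @Metric private MutableCounterLong numOps;
  @Metric private MutableCounterLong numOpsFailed;

  public static DispatcherMetrics create() {
    // Registering the annotated object instantiates the @Metric fields.
    return DefaultMetricsSystem.instance().register(
        "DispatcherMetrics", "Metrics for HddsDispatcher",
        new DispatcherMetrics());
  }

  public void incContainerOps() { numOps.incr(); }
  public void incContainerOpsFailed() { numOpsFailed.incr(); }
}
{code}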



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13707) [PROVIDED Storage] Fix failing integration tests in {{ITestProvidedImplementation}}

2018-06-28 Thread Virajith Jalaparti (JIRA)
Virajith Jalaparti created HDFS-13707:
-

 Summary: [PROVIDED Storage] Fix failing integration tests in 
{{ITestProvidedImplementation}}
 Key: HDFS-13707
 URL: https://issues.apache.org/jira/browse/HDFS-13707
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Virajith Jalaparti
Assignee: Virajith Jalaparti


Many tests in {{ITestProvidedImplementation}} use {{TextFileRegionAliasMap}} as 
the AliasMap, which stores and retrieves path handles for provided locations 
using UTF-8 encoding. HDFS-13186 implements the path handle semantics for 
{{RawLocalFileSystem}} using {{LocalFileSystemPathHandle}}. Storing and 
retrieving these path handles as UTF-8 strings in {{TextFileRegionAliasMap}} 
results in improper serialization/deserialization and fails the associated 
tests.
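
A small standalone illustration of why round-tripping arbitrary handle bytes 
through a UTF-8 string is lossy (not from the patch; invalid byte sequences 
are replaced with U+FFFD on decode):

{code:java}
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class Utf8RoundTrip {
  public static void main(String[] args) {
    // A serialized path handle is arbitrary binary data, not valid UTF-8 text.
    byte[] handle = new byte[] {(byte) 0xC3, 0x28, (byte) 0xA0, 0x01};

    // Decoding to a String and re-encoding replaces the invalid sequences,
    // so the original handle bytes cannot be recovered.
    byte[] roundTripped = new String(handle, StandardCharsets.UTF_8)
        .getBytes(StandardCharsets.UTF_8);

    System.out.println(Arrays.equals(handle, roundTripped)); // prints false
  }
}
{code}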



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13706) ClientGCIContext should be correctly named ClientGSIContext

2018-06-28 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-13706:
--

 Summary: ClientGCIContext should be correctly named 
ClientGSIContext
 Key: HDFS-13706
 URL: https://issues.apache.org/jira/browse/HDFS-13706
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Konstantin Shvachko


GSI stands for Global State Id. It is the client-side counterpart of the 
NameNode's {{GlobalStateIdContext}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[DISCUSS]Merge ContainerIO branch (HDDS-48) in to trunk

2018-06-28 Thread Bharat Viswanadham
Hi everyone,

I’d like to start a thread to discuss merging the HDDS-48 branch to trunk. The 
ContainerIO work refactors the HDDS Datanode IO path to enforce a clean 
separation between the container management and storage layers.

Note: HDDS/Ozone code is not compiled by default in trunk. The 'hdds' maven 
profile must be enabled to compile the branch payload.
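For example, the branch payload can be compiled with an invocation along these 
lines (illustrative; the same profile is used elsewhere in this digest):

  mvn clean install -DskipTests -Phdds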
 
The merge payload includes the following key improvements:
1. Support for multiple container types on the datanode.
2. A new disk layout for containers that supports future upgrades.
3. A volume-choosing policy for container data locations.
4. A human-readable format (YAML) for the .container file.
 
Below are links to the design documents attached to HDDS-48:

https://issues.apache.org/jira/secure/attachment/12923107/ContainerIO-StorageManagement-DesignDoc.pdf
https://issues.apache.org/jira/secure/attachment/12923108/HDDS DataNode Disk Layout.pdf
 
The branch is ready to merge. Over the next week, we will clean up unused 
classes, fix old integration tests, and continue testing the changes.
 
Thanks to Hanisha Koneru, Arpit Agarwal, Anu Engineer, Jitendra Pandey,  Xiaoyu 
Yao, Ajay Kumar, Mukul Kumar Singh, Marton Elek and Shashikant Banerjee for 
their contributions in design, development and code reviews.

Thanks,
Bharat



-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-204) Fix Integration tests in Ozone to modify according to ContainerIO classes

2018-06-28 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-204:
---

 Summary: Fix Integration tests in Ozone to modify according to 
ContainerIO classes
 Key: HDDS-204
 URL: https://issues.apache.org/jira/browse/HDDS-204
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


Update the Ozone integration tests to work with the new ContainerIO classes.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-06-28 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/825/

[Jun 27, 2018 2:25:57 AM] (wangda) YARN-8423. GPU does not get released even 
though the application gets
[Jun 27, 2018 2:27:17 AM] (wangda) YARN-8464. Async scheduling thread could be 
interrupted when there are
[Jun 27, 2018 5:35:15 PM] (sunilg) YARN-8401. [UI2] new ui is not accessible 
with out internet connection.
[Jun 27, 2018 7:39:15 PM] (xyao) HDDS-194. Remove NodePoolManager and node pool 
handling from SCM.
[Jun 27, 2018 8:25:45 PM] (xyao) Revert "HDDS-194. Remove NodePoolManager and 
node pool handling from
[Jun 27, 2018 8:28:00 PM] (xyao) HDDS-194. Remove NodePoolManager and node pool 
handling from SCM.
[Jun 27, 2018 8:35:30 PM] (xyao) HDDS-186. Create under replicated queue. 
Contributed by Ajay Kumar.
[Jun 27, 2018 8:56:45 PM] (xyao) HDDS-170. Fix 
TestBlockDeletingService#testBlockDeletionTimeout.
[Jun 27, 2018 9:15:15 PM] (aengineer) HDDS-94. Change ozone datanode command to 
start the standalone datanode
[Jun 27, 2018 9:18:25 PM] (aengineer) HDDS-193. Make Datanode heartbeat 
dispatcher in SCM event based.




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.security.TestRaceWhenRelogin 
   hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageEntities 
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity 
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun 
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps 
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction 
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageSchema 
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageDomain 
   hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestMRTimelineEventHandling 
  

   cc:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/825/artifact/out/diff-compile-cc-root.txt  [4.0K]

   javac:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/825/artifact/out/diff-compile-javac-root.txt  [352K]

   checkstyle:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/825/artifact/out/diff-checkstyle-root.txt  [4.0K]

   pathlen:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/825/artifact/out/pathlen.txt  [12K]

   pylint:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/825/artifact/out/diff-patch-pylint.txt  [24K]

   shellcheck:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/825/artifact/out/diff-patch-shellcheck.txt  [20K]

   shelldocs:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/825/artifact/out/diff-patch-shelldocs.txt  [16K]

   whitespace:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/825/artifact/out/whitespace-eol.txt  [9.4M]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/825/artifact/out/whitespace-tabs.txt  [1.1M]

   xml:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/825/artifact/out/xml.txt  [4.0K]

   findbugs:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/825/artifact/out/branch-findbugs-hadoop-hdds_client.txt  [56K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/825/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt  [48K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/825/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt  [56K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/825/artifact/out/branch-findbugs-hadoop-hdds_tools.txt  [12K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/825/artifact/out/branch-findbugs-hadoop-ozone_client.txt  [4.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/825/artifact/out/branch-findbugs-hadoop-ozone_common.txt  [24K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/825/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt  [4.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/825/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt  [4.0K]

[jira] [Created] (HDDS-203) Add getCommittedBlockLength API in datanode

2018-06-28 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-203:


 Summary: Add getCommittedBlockLength API in datanode
 Key: HDDS-203
 URL: https://issues.apache.org/jira/browse/HDDS-203
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client, Ozone Datanode
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.2.1


When a container gets closed on the Datanode while active writes are in 
progress from an OzoneClient, client write requests will fail with 
ContainerClosedException. In that case, the Ozone client needs to query the 
last committed block length from the datanodes and update OzoneMaster with the 
new length for the block. This Jira proposes to add an RPC call to get the 
last committed length of a block on a Datanode.
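
A rough sketch of the shape such a call could take on the client-facing 
datanode protocol (illustrative only; the actual RPC/protobuf definition is 
part of the patch, and the names and parameters below are assumptions):

{code:java}
import java.io.IOException;

// Hypothetical interface; names and parameters are illustrative.
public interface CommittedBlockLengthProtocol {
  /**
   * Returns the last committed length of the given block on this datanode,
   * so the client can update OzoneMaster after a ContainerClosedException.
   */
  long getCommittedBlockLength(long containerId, long localBlockId)
      throws IOException;
}
{code}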



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-202) Doclet build fails in ozonefs

2018-06-28 Thread Takanobu Asanuma (JIRA)
Takanobu Asanuma created HDDS-202:
-

 Summary: Doclet build fails in ozonefs
 Key: HDDS-202
 URL: https://issues.apache.org/jira/browse/HDDS-202
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Filesystem
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma


{noformat}
$ mvn clean install -DskipTests -DskipShade -Phdds -Pdist --projects hadoop-ozone/ozonefs
...
[INFO] --- maven-javadoc-plugin:3.0.0-M1:jar (module-javadocs) @ hadoop-ozone-filesystem ---
[INFO]
ExcludePrivateAnnotationsStandardDoclet
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 13.223 s
[INFO] Finished at: 2018-06-28T19:46:49+09:00
[INFO] Final Memory: 122M/1196M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-javadoc-plugin:3.0.0-M1:jar (module-javadocs) on project hadoop-ozone-filesystem: MavenReportException: Error while generating Javadoc:
[ERROR] Exit code: 1 - Picked up _JAVA_OPTIONS: -Duser.language=en
[ERROR] java.lang.ArrayIndexOutOfBoundsException: 0
[ERROR]     at com.sun.tools.doclets.formats.html.ConfigurationImpl.setTopFile(ConfigurationImpl.java:537)
[ERROR]     at com.sun.tools.doclets.formats.html.ConfigurationImpl.setSpecificDocletOptions(ConfigurationImpl.java:309)
[ERROR]     at com.sun.tools.doclets.internal.toolkit.Configuration.setOptions(Configuration.java:560)
[ERROR]     at com.sun.tools.doclets.internal.toolkit.AbstractDoclet.startGeneration(AbstractDoclet.java:134)
[ERROR]     at com.sun.tools.doclets.internal.toolkit.AbstractDoclet.start(AbstractDoclet.java:82)
[ERROR]     at com.sun.tools.doclets.formats.html.HtmlDoclet.start(HtmlDoclet.java:80)
[ERROR]     at com.sun.tools.doclets.standard.Standard.start(Standard.java:39)
[ERROR]     at org.apache.hadoop.classification.tools.ExcludePrivateAnnotationsStandardDoclet.start(ExcludePrivateAnnotationsStandardDoclet.java:41)
[ERROR]     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[ERROR]     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[ERROR]     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[ERROR]     at java.lang.reflect.Method.invoke(Method.java:498)
[ERROR]     at com.sun.tools.javadoc.DocletInvoker.invoke(DocletInvoker.java:310)
[ERROR]     at com.sun.tools.javadoc.DocletInvoker.start(DocletInvoker.java:189)
[ERROR]     at com.sun.tools.javadoc.Start.parseAndExecute(Start.java:366)
[ERROR]     at com.sun.tools.javadoc.Start.begin(Start.java:219)
[ERROR]     at com.sun.tools.javadoc.Start.begin(Start.java:205)
[ERROR]     at com.sun.tools.javadoc.Main.execute(Main.java:64)
[ERROR]     at com.sun.tools.javadoc.Main.main(Main.java:54)
{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13705) The native ISA-L library loading failure should be made warning rather than an error message

2018-06-28 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDFS-13705:
--

 Summary: The native ISA-L library loading failure should be made 
warning rather than an error message
 Key: HDFS-13705
 URL: https://issues.apache.org/jira/browse/HDFS-13705
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee


If loading the native ISA-L library fails, the built-in Java library is used 
for erasure coding.

The loading failure should be logged as a warning, and the stack trace below 
should be suppressed.

 
{code:java}
18/06/26 10:22:34 ERROR erasurecode.ErasureCodeNative: Loading ISA-L failed
java.lang.UnsatisfiedLinkError: Failed to load libisal.so.2 (libisal.so.2: cannot open shared object file: No such file or directory)
    at org.apache.hadoop.io.erasurecode.ErasureCodeNative.loadLibrary(Native Method)
    at org.apache.hadoop.io.erasurecode.ErasureCodeNative.<clinit>(ErasureCodeNative.java:46)
    at org.apache.hadoop.io.erasurecode.rawcoder.NativeRSRawEncoder.<init>(NativeRSRawEncoder.java:34)
    at org.apache.hadoop.io.erasurecode.rawcoder.NativeRSRawErasureCoderFactory.createEncoder(NativeRSRawErasureCoderFactory.java:35)
    at org.apache.hadoop.io.erasurecode.CodecUtil.createRawEncoderWithFallback(CodecUtil.java:177)
    at org.apache.hadoop.io.erasurecode.CodecUtil.createRawEncoder(CodecUtil.java:129)
    at org.apache.hadoop.hdfs.DFSStripedOutputStream.<init>(DFSStripedOutputStream.java:309)
    at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:307)
{code}
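
A sketch of the proposed behavior, assuming a standard SLF4J logger 
(illustrative only; the real loading happens in {{ErasureCodeNative}}'s static 
initializer through a native method, and the class below is hypothetical):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class IsalLoader {
  private static final Logger LOG = LoggerFactory.getLogger(IsalLoader.class);

  static boolean tryLoadIsal() {
    try {
      System.loadLibrary("isal");
      return true;
    } catch (UnsatisfiedLinkError e) {
      // WARN with the message only; the stack trace is suppressed because
      // the built-in Java coder is used as a fallback.
      LOG.warn("Loading ISA-L failed: {}. Using the built-in Java coder.",
          e.getMessage());
      return false;
    }
  }
}
{code}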



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org