[jira] [Created] (HADOOP-18238) Hadoop 3.3.1 SFTPFileSystem.close() method has a problem
yi liu created HADOOP-18238:
---

Summary: Hadoop 3.3.1 SFTPFileSystem.close() method has a problem
Key: HADOOP-18238
URL: https://issues.apache.org/jira/browse/HADOOP-18238
Project: Hadoop Common
Issue Type: Bug
Components: common
Affects Versions: 3.3.1
Reporter: yi liu

{code}
@Override
public void close() throws IOException {
  if (closed.getAndSet(true)) {
    return;
  }
  try {
    super.close();
  } finally {
    if (connectionPool != null) {
      connectionPool.shutdown();
    }
  }
}
{code}

If you execute this method, the fs can no longer run deleteOnExit, because the fs is closed. If close() is called manually, the SFTP fs shuts down the connection pool so the JVM can exit normally, but deleteOnExit then fails because the fs is already closed. If close() is not called, the connection pool is never released and the JVM cannot exit.

https://issues.apache.org/jira/browse/HADOOP-17528 is the same issue for the 3.2.0 SFTPFileSystem.

--
This message was sent by Atlassian Jira
(v8.20.7#820007)

---
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-12295) Improve NetworkTopology#InnerNode#remove logic
Yi Liu created HADOOP-12295:
---

Summary: Improve NetworkTopology#InnerNode#remove logic
Key: HADOOP-12295
URL: https://issues.apache.org/jira/browse/HADOOP-12295
Project: Hadoop Common
Issue Type: Improvement
Reporter: Yi Liu
Assignee: Yi Liu

In {{NetworkTopology#InnerNode#remove}}, we can use {{childrenMap}} to get the parent node; there is no need to loop over the {{children}} list.
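The improvement amounts to an O(1) map lookup in place of a linear scan; a minimal, self-contained sketch (the field name echoes the JIRA text, but this is not the actual NetworkTopology code):

```java
import java.util.HashMap;
import java.util.Map;

class InnerNodeSketch {
    // Kept in sync with the children list; maps a child's name to the child,
    // so remove() can locate a node directly instead of scanning the list.
    private final Map<String, Object> childrenMap = new HashMap<>();

    void addChild(String name, Object node) {
        childrenMap.put(name, node);
    }

    /** O(1) lookup replacing the old loop over the children list. */
    Object getChild(String name) {
        return childrenMap.get(name);
    }
}
```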
[jira] [Resolved] (HADOOP-11908) Erasure coding: Should be able to encode part of parity blocks.
[ https://issues.apache.org/jira/browse/HADOOP-11908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yi Liu resolved HADOOP-11908.
---
Resolution: Duplicate

> Erasure coding: Should be able to encode part of parity blocks.
> ---
>
> Key: HADOOP-11908
> URL: https://issues.apache.org/jira/browse/HADOOP-11908
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: io
> Reporter: Yi Liu
> Assignee: Kai Zheng
>
> {code}
> public void encode(ByteBuffer[] inputs, ByteBuffer[] outputs);
> {code}
> Currently when we do encode, the outputs are all parity blocks; we should be
> able to encode part of the parity blocks.
> This is required for datanode striped block recovery: if one or more parity
> blocks are missing, we need to encode to recover them. Encoding only part of
> the parity blocks should be more efficient than encoding all of them.
[jira] [Resolved] (HADOOP-11961) Add interface of whether codec has chunk boundary to Erasure coder
[ https://issues.apache.org/jira/browse/HADOOP-11961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yi Liu resolved HADOOP-11961.
---
Resolution: Invalid

> Add interface of whether codec has chunk boundary to Erasure coder
> ---
>
> Key: HADOOP-11961
> URL: https://issues.apache.org/jira/browse/HADOOP-11961
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: io
> Reporter: Yi Liu
[jira] [Created] (HADOOP-11961) Add isLinear interface to Erasure coder
Yi Liu created HADOOP-11961:
---

Summary: Add isLinear interface to Erasure coder
Key: HADOOP-11961
URL: https://issues.apache.org/jira/browse/HADOOP-11961
Project: Hadoop Common
Issue Type: Sub-task
Reporter: Yi Liu
Assignee: Yi Liu

Today we had a discussion including [~zhz], [~drankye], etc.; this was also discussed in HDFS-8347.

Some coders like {{RS}} and {{XOR}} are linear; some have a coding boundary, like HitchHiker. If the coder is linear, we can decode at any size and don't need to pad inputs to *chunksize*; if the coder is not linear, the inputs need to be padded to *chunksize* before decoding.

This interface is important for performance, and it can save memory/disk space, since the parity cells are the same size as the first data cell (which may be less than the codec chunksize).
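A minimal sketch of what such an interface might look like — the method name follows the summary, while the surrounding types are illustrative stand-ins, not the actual Hadoop coder API:

```java
// Illustrative stand-in for the erasure coder abstraction discussed above.
interface ErasureCoder {
    /**
     * Whether the code is linear. Linear codes (e.g. RS, XOR) can decode at
     * any size without padding inputs to the codec chunk size; coders with a
     * coding boundary (e.g. HitchHiker) require padding to the chunk size.
     */
    boolean isLinear();
}

class RSCoder implements ErasureCoder {
    @Override
    public boolean isLinear() {
        return true;  // Reed-Solomon is linear, so no padding is needed
    }
}
```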
[jira] [Created] (HADOOP-11908) Erasure coding: Should be able to encode part of parity blocks.
Yi Liu created HADOOP-11908:
---

Summary: Erasure coding: Should be able to encode part of parity blocks.
Key: HADOOP-11908
URL: https://issues.apache.org/jira/browse/HADOOP-11908
Project: Hadoop Common
Issue Type: Sub-task
Reporter: Yi Liu
Assignee: Yi Liu

{code}
public void encode(ByteBuffer[] inputs, ByteBuffer[] outputs);
{code}

Currently when we do encode, the outputs are all parity blocks; we should be able to encode part of the parity blocks.

This is required for datanode striped block recovery: if one or more parity blocks are missing, we need to encode to recover them. Encoding only part of the parity blocks should be more efficient than encoding all of them.
[jira] [Created] (HADOOP-11595) Add default implementation for AbstractFileSystem#truncate
Yi Liu created HADOOP-11595:
---

Summary: Add default implementation for AbstractFileSystem#truncate
Key: HADOOP-11595
URL: https://issues.apache.org/jira/browse/HADOOP-11595
Project: Hadoop Common
Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Yi Liu
Assignee: Yi Liu

As [~cnauroth] commented in HADOOP-11510, we should add a default implementation for AbstractFileSystem#truncate to avoid breaking backwards compatibility.
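One backwards-compatible default is simply to throw, so subclasses that predate truncate keep compiling; a hedged sketch using a stand-in base class (not the actual Hadoop AbstractFileSystem signature):

```java
// Self-contained stand-in for org.apache.hadoop.fs.AbstractFileSystem.
abstract class BaseFileSystem {
    /**
     * Default implementation: subclasses that support truncate override
     * this; everyone else inherits a clear "unsupported" failure instead of
     * a compile error when the new method is added to the base class.
     */
    public boolean truncate(String path, long newLength) {
        throw new UnsupportedOperationException(
            getClass().getSimpleName() + " doesn't support truncate");
    }
}
```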
[jira] [Resolved] (HADOOP-11533) Hadoop First Weekly Plan
[ https://issues.apache.org/jira/browse/HADOOP-11533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yi Liu resolved HADOOP-11533.
---
Resolution: Invalid

Please don't create a JIRA for your own work plan or task list.

> Hadoop First Weekly Plan
> ---
>
> Key: HADOOP-11533
> URL: https://issues.apache.org/jira/browse/HADOOP-11533
> Project: Hadoop Common
> Issue Type: Task
> Reporter: dengjie
>
> This is the first work plan; detailed work includes installing the Hadoop
> env and testing that Hadoop can be used.
> start date: 2015-02-01
> end date: 2015-02-06
[jira] [Created] (HADOOP-11510) Expose truncate API via FileContext
Yi Liu created HADOOP-11510:
---

Summary: Expose truncate API via FileContext
Key: HADOOP-11510
URL: https://issues.apache.org/jira/browse/HADOOP-11510
Project: Hadoop Common
Issue Type: New Feature
Reporter: Yi Liu
Assignee: Yi Liu

We also need to expose the truncate API via {{org.apache.hadoop.fs.FileContext}}.
[jira] [Created] (HADOOP-11452) Revisit org.apache.hadoop.fs.FileSystem#rename
Yi Liu created HADOOP-11452:
---

Summary: Revisit org.apache.hadoop.fs.FileSystem#rename
Key: HADOOP-11452
URL: https://issues.apache.org/jira/browse/HADOOP-11452
Project: Hadoop Common
Issue Type: Task
Components: fs
Reporter: Yi Liu
Assignee: Yi Liu

Currently in {{FileSystem}}, {{rename}} with _Rename options_ is protected and carries a _deprecated_ annotation, and the default implementation is not atomic, so this method cannot be used externally. On the other hand, HDFS has a good, atomic implementation. (Also, an interesting thing in {{DFSClient}}: the _deprecated_ annotations for these two methods are opposite.)

It makes sense to make {{rename}} with _Rename options_ public, since it is atomic for rename+overwrite, and it also saves RPC calls if the user wants rename+overwrite.
[jira] [Created] (HADOOP-11424) Fix failure for TestOsSecureRandom
Yi Liu created HADOOP-11424:
---

Summary: Fix failure for TestOsSecureRandom
Key: HADOOP-11424
URL: https://issues.apache.org/jira/browse/HADOOP-11424
Project: Hadoop Common
Issue Type: Bug
Components: test
Reporter: Yi Liu
Assignee: Yi Liu

Recently I often see failures of {{testOsSecureRandomSetConf}} in TestOsSecureRandom.

https://builds.apache.org/job/PreCommit-HADOOP-Build/5298//testReport/org.apache.hadoop.crypto.random/TestOsSecureRandom/testOsSecureRandomSetConf/

{code}
java.lang.Exception: test timed out after 12 milliseconds
	at java.io.FileInputStream.readBytes(Native Method)
	at java.io.FileInputStream.read(FileInputStream.java:272)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
	at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
	at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
	at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
	at java.io.InputStreamReader.read(InputStreamReader.java:184)
	at java.io.BufferedReader.fill(BufferedReader.java:154)
	at java.io.BufferedReader.read1(BufferedReader.java:205)
	at java.io.BufferedReader.read(BufferedReader.java:279)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.parseExecResult(Shell.java:735)
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:531)
	at org.apache.hadoop.util.Shell.run(Shell.java:456)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
	at org.apache.hadoop.crypto.random.TestOsSecureRandom.testOsSecureRandomSetConf(TestOsSecureRandom.java:149)
{code}
[jira] [Created] (HADOOP-11422) Check CryptoCodec is AES-CTR for Crypto input/output stream
Yi Liu created HADOOP-11422:
---

Summary: Check CryptoCodec is AES-CTR for Crypto input/output stream
Key: HADOOP-11422
URL: https://issues.apache.org/jira/browse/HADOOP-11422
Project: Hadoop Common
Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor

{{CryptoInputStream}} and {{CryptoOutputStream}} require AES-CTR as the algorithm/mode. Although AES-CTR is currently the only implementation, we had better check it.
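The check itself can be a one-line guard; a sketch under the assumption that the suite is compared by its transformation name (an illustrative helper, not the actual patch):

```java
// Illustrative guard for the requirement described above.
class CipherSuiteCheck {
    static final String AES_CTR_NOPADDING = "AES/CTR/NoPadding";

    /**
     * The crypto streams rely on CTR's random-access property (counter
     * arithmetic for seek), so reject any other cipher suite up front.
     */
    static void checkCipherSuite(String suite) {
        if (!AES_CTR_NOPADDING.equals(suite)) {
            throw new IllegalArgumentException(
                "Crypto streams require " + AES_CTR_NOPADDING
                    + ", but got " + suite);
        }
    }
}
```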
[jira] [Created] (HADOOP-11413) Remove unused CryptoCodec in org.apache.hadoop.fs.Hdfs
Yi Liu created HADOOP-11413:
---

Summary: Remove unused CryptoCodec in org.apache.hadoop.fs.Hdfs
Key: HADOOP-11413
URL: https://issues.apache.org/jira/browse/HADOOP-11413
Project: Hadoop Common
Issue Type: Improvement
Components: security
Affects Versions: 2.6.0
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor

In org.apache.hadoop.fs.Hdfs, the {{CryptoCodec}} is unused and we can remove it.
[jira] [Created] (HADOOP-11358) Tests for encryption/decryption with IV calculation overflow
Yi Liu created HADOOP-11358:
---

Summary: Tests for encryption/decryption with IV calculation overflow
Key: HADOOP-11358
URL: https://issues.apache.org/jira/browse/HADOOP-11358
Project: Hadoop Common
Issue Type: Test
Components: security, test
Affects Versions: 2.6.0
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor

As discussed in HADOOP-11343, add more tests to cover encryption/decryption with IV calculation overflow.
[jira] [Created] (HADOOP-11339) Reuse buffer for Hadoop RPC
Yi Liu created HADOOP-11339:
---

Summary: Reuse buffer for Hadoop RPC
Key: HADOOP-11339
URL: https://issues.apache.org/jira/browse/HADOOP-11339
Project: Hadoop Common
Issue Type: Improvement
Components: ipc, performance
Reporter: Yi Liu
Assignee: Yi Liu

For Hadoop RPC, we try to reuse the available connections, but when we process each RPC on the same connection, we allocate a fresh heap byte buffer to store the RPC bytes. The RPC message may be very large, e.g., a datanode block report. There is a chance of triggering a full GC, as discussed in HDFS-7435.
[jira] [Created] (HADOOP-11250) fix endmacro of set_find_shared_library_without_version in CMakeLists
Yi Liu created HADOOP-11250:
---

Summary: fix endmacro of set_find_shared_library_without_version in CMakeLists
Key: HADOOP-11250
URL: https://issues.apache.org/jira/browse/HADOOP-11250
Project: Hadoop Common
Issue Type: Bug
Components: build
Affects Versions: 2.6.0
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor

There is a small nit for {{set_find_shared_library_without_version}} in CMakeLists.txt:

{code}
endmacro(set_find_shared_library_version LVERS)
{code}

should be

{code}
endmacro(set_find_shared_library_without_version)
{code}
[jira] [Created] (HADOOP-11249) Improve Openssl version detection
Yi Liu created HADOOP-11249:
---

Summary: Improve Openssl version detection
Key: HADOOP-11249
URL: https://issues.apache.org/jira/browse/HADOOP-11249
Project: Hadoop Common
Issue Type: Improvement
Components: security
Affects Versions: 2.6.0
Reporter: Yi Liu

As discussed in HADOOP-11216, we can improve the OpenSSL version detection.
[jira] [Created] (HADOOP-11216) Improve Openssl library finding
Yi Liu created HADOOP-11216:
---

Summary: Improve Openssl library finding
Key: HADOOP-11216
URL: https://issues.apache.org/jira/browse/HADOOP-11216
Project: Hadoop Common
Issue Type: Improvement
Components: security
Affects Versions: 2.6.0
Reporter: Yi Liu
Assignee: Yi Liu

When we compile OpenSSL 1.0.0\(x\) or 1.0.1\(x\) using the default options, there will be a {{libcrypto.so.1.0.0}} in the output lib dir, so we expect this version suffix in the cmake build file:

{code}
SET(STORED_CMAKE_FIND_LIBRARY_SUFFIXES CMAKE_FIND_LIBRARY_SUFFIXES)
set_find_shared_library_version("1.0.0")
SET(OPENSSL_NAME "crypto")
{code}

If we don't bundle the crypto shared library in the Hadoop distribution, Hadoop will try to find the crypto library on the system path at runtime. But in a real Linux distribution, there may be no {{libcrypto.so.1.0.0}} or {{libcrypto.so}} even if the system's embedded OpenSSL is 1.0.1\(x\); then we need to make a symbolic link.

This JIRA is to improve the OpenSSL library finding.
[jira] [Created] (HADOOP-11204) Fix incorrect property in hadoop-kms/src/main/conf/kms-site.xml
Yi Liu created HADOOP-11204:
---

Summary: Fix incorrect property in hadoop-kms/src/main/conf/kms-site.xml
Key: HADOOP-11204
URL: https://issues.apache.org/jira/browse/HADOOP-11204
Project: Hadoop Common
Issue Type: Bug
Components: kms
Affects Versions: 2.5.0
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor

{{hadoop.security.keystore.JavaKeyStoreProvider.password}} doesn't exist; it should be {{hadoop.security.keystore.java-keystore-provider.password-file}}.
[jira] [Created] (HADOOP-11164) Fix several issues of hadoop security configuration in user doc.
Yi Liu created HADOOP-11164:
---

Summary: Fix several issues of hadoop security configuration in user doc.
Key: HADOOP-11164
URL: https://issues.apache.org/jira/browse/HADOOP-11164
Project: Hadoop Common
Issue Type: Bug
Components: documentation, security
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Trivial

There are several issues in the secure-mode user doc:

{{dfs.namenode.secondary.keytab.file}} should be {{dfs.secondary.namenode.keytab.file}}, and {{dfs.namenode.secondary.kerberos.principal}} should be {{dfs.secondary.namenode.kerberos.principal}}.

{{dfs.namenode.kerberos.https.principal}} doesn't exist; it should be {{dfs.namenode.kerberos.internal.spnego.principal}}.

{{dfs.namenode.secondary.kerberos.https.principal}} doesn't exist; it should be {{dfs.secondary.namenode.kerberos.internal.spnego.principal}}.

{{dfs.datanode.kerberos.https.principal}} doesn't exist; we can remove it.
[jira] [Resolved] (HADOOP-11155) Add auth parameters for KMS request and fix TestEncryptionZonesWithKMS issue
[ https://issues.apache.org/jira/browse/HADOOP-11155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yi Liu resolved HADOOP-11155.
---
Resolution: Duplicate

Duplicate of HADOOP-11151.

> Add auth parameters for KMS request and fix TestEncryptionZonesWithKMS issue
> ---
>
> Key: HADOOP-11155
> URL: https://issues.apache.org/jira/browse/HADOOP-11155
> Project: Hadoop Common
> Issue Type: Bug
> Components: kms
> Affects Versions: 2.6.0
> Reporter: Yi Liu
> Assignee: Yi Liu
>
> We need to add auth parameters when making a KMS request. Currently we see:
> {quote}
> 2014-09-29 23:13:01,488 WARN server.AuthenticationFilter
> (AuthenticationFilter.java:doFilter(551)) - Authentication exception:
> Anonymous requests are disallowed
> org.apache.hadoop.security.authentication.client.AuthenticationException:
> Anonymous requests are disallowed
> at org.apache.hadoop.security.authentication.server.PseudoAuthenticationHandler.authenticate(PseudoAuthenticationHandler.java:184)
> at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:331)
> at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:507)
> {quote}
> This JIRA will also try to resolve the failure of
> {{TestEncryptionZonesWithKMS}} in the Jenkins report.
[jira] [Created] (HADOOP-11155) Add auth parameters for KMS request and fix TestEncryptionZonesWithKMS issue
Yi Liu created HADOOP-11155:
---

Summary: Add auth parameters for KMS request and fix TestEncryptionZonesWithKMS issue
Key: HADOOP-11155
URL: https://issues.apache.org/jira/browse/HADOOP-11155
Project: Hadoop Common
Issue Type: Bug
Components: kms
Affects Versions: 2.6.0
Reporter: Yi Liu
Assignee: Yi Liu

We need to add auth parameters when making a KMS request. Currently we see:

{quote}
2014-09-29 23:13:01,488 WARN server.AuthenticationFilter (AuthenticationFilter.java:doFilter(551)) - Authentication exception: Anonymous requests are disallowed
org.apache.hadoop.security.authentication.client.AuthenticationException: Anonymous requests are disallowed
at org.apache.hadoop.security.authentication.server.PseudoAuthenticationHandler.authenticate(PseudoAuthenticationHandler.java:184)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:331)
at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:507)
{quote}

This JIRA will also try to resolve the failure of {{TestEncryptionZonesWithKMS}} in the Jenkins report.
[jira] [Resolved] (HADOOP-11129) Fix findbug issue introduced by HADOOP-11017
[ https://issues.apache.org/jira/browse/HADOOP-11129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yi Liu resolved HADOOP-11129.
---
Resolution: Duplicate

The findbugs issue was already reported; resolving this as a duplicate.

> Fix findbug issue introduced by HADOOP-11017
> ---
>
> Key: HADOOP-11129
> URL: https://issues.apache.org/jira/browse/HADOOP-11129
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 2.6.0
> Reporter: Yi Liu
> Assignee: Yi Liu
>
> This JIRA is to fix the findbugs issue introduced by HADOOP-11017:
> {quote}
> Inconsistent synchronization of
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.delegationTokenSequenceNumber
> {quote}
[jira] [Created] (HADOOP-11129) Fix findbug issue introduced by HADOOP-11017
Yi Liu created HADOOP-11129:
---

Summary: Fix findbug issue introduced by HADOOP-11017
Key: HADOOP-11129
URL: https://issues.apache.org/jira/browse/HADOOP-11129
Project: Hadoop Common
Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Yi Liu
Assignee: Yi Liu

This JIRA is to fix the findbugs issue introduced by HADOOP-11017:

{quote}
Inconsistent synchronization of org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.delegationTokenSequenceNumber
{quote}
[jira] [Created] (HADOOP-11040) Return value of read(ByteBuffer buf) in CryptoInputStream is incorrect in some cases
Yi Liu created HADOOP-11040:
---

Summary: Return value of read(ByteBuffer buf) in CryptoInputStream is incorrect in some cases
Key: HADOOP-11040
URL: https://issues.apache.org/jira/browse/HADOOP-11040
Project: Hadoop Common
Issue Type: Bug
Components: security
Affects Versions: 2.6.0
Reporter: Yi Liu
Assignee: Yi Liu

In {{CryptoInputStream}}, for {{int read(ByteBuffer buf)}}, if there is unread data in outBuffer, then the current return value is incorrect. This case happens when the caller first uses the byte-array read and then does the ByteBuffer read.
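The fix boils down to counting bytes drained from the internal {{outBuffer}} as part of the return value; a simplified, self-contained sketch of that accounting (the real CryptoInputStream also decrypts, which is omitted here):

```java
import java.nio.ByteBuffer;

// Simplified model: leftover decrypted bytes must be counted in the value
// returned by read(ByteBuffer), not just the freshly read bytes.
class BufferedReadSketch {
    private final ByteBuffer outBuffer;  // leftover from a byte[]-based read
    private final ByteBuffer source;     // stand-in for the underlying stream

    BufferedReadSketch(ByteBuffer leftover, ByteBuffer source) {
        this.outBuffer = leftover;
        this.source = source;
    }

    int read(ByteBuffer buf) {
        int n = 0;
        // 1) Drain leftover bytes first -- forgetting to include these in
        //    the return value is the bug described above.
        while (outBuffer.hasRemaining() && buf.hasRemaining()) {
            buf.put(outBuffer.get());
            n++;
        }
        // 2) Then read from the underlying source.
        while (source.hasRemaining() && buf.hasRemaining()) {
            buf.put(source.get());
            n++;
        }
        return (n == 0 && !source.hasRemaining()) ? -1 : n;
    }
}
```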
[jira] [Created] (HADOOP-11039) ByteBufferReadable API doc is inconsistent with the implementations.
Yi Liu created HADOOP-11039:
---

Summary: ByteBufferReadable API doc is inconsistent with the implementations.
Key: HADOOP-11039
URL: https://issues.apache.org/jira/browse/HADOOP-11039
Project: Hadoop Common
Issue Type: Bug
Components: documentation
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor

In {{ByteBufferReadable}}, the API doc of {{int read(ByteBuffer buf)}} says:

{quote}
After a successful call, buf.position() and buf.limit() should be unchanged, and therefore any data can be immediately read from buf. buf.mark() may be cleared or updated.
{quote}

{quote}
@param buf the ByteBuffer to receive the results of the read operation. Up to buf.limit() - buf.position() bytes may be read.
{quote}

But the actual behavior of the implementations (e.g. {{DFSInputStream}}, {{RemoteBlockReader2}}) is: *upon return, buf.position() will be advanced by the number of bytes read.*

The implementation in {{RemoteBlockReader2}} is as follows:

{code}
@Override
public int read(ByteBuffer buf) throws IOException {
  if (curDataSlice == null || curDataSlice.remaining() == 0 && bytesNeededToFinish > 0) {
    readNextPacket();
  }
  if (curDataSlice.remaining() == 0) {
    // we're at EOF now
    return -1;
  }

  int nRead = Math.min(curDataSlice.remaining(), buf.remaining());
  ByteBuffer writeSlice = curDataSlice.duplicate();
  writeSlice.limit(writeSlice.position() + nRead);
  buf.put(writeSlice);
  curDataSlice.position(writeSlice.position());

  return nRead;
}
{code}

This description is very important and guides users in how to use this API, and all the implementations should keep the same behavior. We should fix the javadoc.
[jira] [Created] (HADOOP-10967) Improve DefaultCryptoExtension#generateEncryptedKey performance
Yi Liu created HADOOP-10967:
---

Summary: Improve DefaultCryptoExtension#generateEncryptedKey performance
Key: HADOOP-10967
URL: https://issues.apache.org/jira/browse/HADOOP-10967
Project: Hadoop Common
Issue Type: Improvement
Components: security
Affects Versions: 2.6.0
Reporter: Yi Liu
Assignee: Yi Liu

This JIRA is to improve generateEncryptedKey performance:

*1.* SecureRandom#generateSeed is very slow; we should use SecureRandom#nextBytes to generate the {{IV}}, which is much faster.

*2.* Define SecureRandom as a thread-local object, which improves performance a bit.

*3.* Use {{new SecureRandom()}} instead of SHA1PRNG; the former has better entropy.
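The three points above can be combined in plain JDK code; a minimal sketch with an illustrative class name (not the actual DefaultCryptoExtension code):

```java
import java.security.SecureRandom;

class IvGenerator {
    // One SecureRandom per thread avoids lock contention on a shared
    // instance. new SecureRandom() picks the platform default (better
    // entropy than forcing SHA1PRNG), and nextBytes() is far cheaper than
    // generateSeed(), which gathers fresh seed material on every call.
    private static final ThreadLocal<SecureRandom> RANDOM =
        ThreadLocal.withInitial(SecureRandom::new);

    /** Fills and returns a fresh IV of the requested length. */
    static byte[] generateIv(int len) {
        byte[] iv = new byte[len];
        RANDOM.get().nextBytes(iv);
        return iv;
    }
}
```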
[jira] [Created] (HADOOP-10964) Small fix for NetworkTopologyWithNodeGroup#sortByDistance
Yi Liu created HADOOP-10964:
---

Summary: Small fix for NetworkTopologyWithNodeGroup#sortByDistance
Key: HADOOP-10964
URL: https://issues.apache.org/jira/browse/HADOOP-10964
Project: Hadoop Common
Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor

{{nodes.length}} should be {{activeLen}}.

{code}
@Override
public void sortByDistance(Node reader, Node[] nodes, int activeLen,
    long seed, boolean randomizeBlockLocationsPerBlock) {
  // If reader is not a datanode (not in NetworkTopology tree), we need to
  // replace this reader with a sibling leaf node in tree.
  if (reader != null && !this.contains(reader)) {
    Node nodeGroup = getNode(reader.getNetworkLocation());
    if (nodeGroup != null && nodeGroup instanceof InnerNode) {
      InnerNode parentNode = (InnerNode) nodeGroup;
      // replace reader with the first children of its parent in tree
      reader = parentNode.getLeaf(0, null);
    } else {
      return;
    }
  }
  super.sortByDistance(reader, nodes, nodes.length, seed,
      randomizeBlockLocationsPerBlock);
}
{code}
[jira] [Created] (HADOOP-10938) Remove thread-safe description in PositionedReadable javadoc
Yi Liu created HADOOP-10938:
---

Summary: Remove thread-safe description in PositionedReadable javadoc
Key: HADOOP-10938
URL: https://issues.apache.org/jira/browse/HADOOP-10938
Project: Hadoop Common
Issue Type: Bug
Components: documentation
Affects Versions: 2.6.0
Reporter: Yi Liu
Assignee: Yi Liu

According to the discussion in HDFS-6813, we may need to remove the thread-safe description in the PositionedReadable javadoc, since DFSInputStream, WebhdfsFileSystem#inputStream, and HarInputStream don't implement them thread-safely.
[jira] [Created] (HADOOP-10930) HarFsInputStream should implement PositionedReadable thread-safely.
Yi Liu created HADOOP-10930:
---

Summary: HarFsInputStream should implement PositionedReadable thread-safely.
Key: HADOOP-10930
URL: https://issues.apache.org/jira/browse/HADOOP-10930
Project: Hadoop Common
Issue Type: Bug
Components: fs
Affects Versions: 2.6.0
Reporter: Yi Liu
Assignee: Yi Liu

The {{PositionedReadable}} definition requires that implementations of its interfaces be thread-safe. HarFsInputStream doesn't implement these interfaces thread-safely; this JIRA is to fix that.
[jira] [Resolved] (HADOOP-9332) Crypto codec implementations for AES
[ https://issues.apache.org/jira/browse/HADOOP-9332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yi Liu resolved HADOOP-9332.
---
Resolution: Duplicate

We will not use this approach anymore; instead, we use the approach in HADOOP-10150, so mark this JIRA as duplicate.

> Crypto codec implementations for AES
> ---
>
> Key: HADOOP-9332
> URL: https://issues.apache.org/jira/browse/HADOOP-9332
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: security
> Affects Versions: 3.0.0
> Reporter: Yi Liu
> Assignee: Yi Liu
> Fix For: 3.0.0
>
> Attachments: HADOOP-9332.patch, HADOOP-9332.patch
>
> This JIRA task provides three crypto codec implementations based on the
> Hadoop crypto codec framework. They are:
> 1. Simple AES Codec. AES codec implementation based on AES-NI. (Not
> splittable)
> 2. AES Codec. AES codec implementation based on AES-NI in splittable
> format.
[jira] [Resolved] (HADOOP-10853) Refactor get instance of CryptoCodec and support create via algorithm/mode/padding.
[ https://issues.apache.org/jira/browse/HADOOP-10853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yi Liu resolved HADOOP-10853.
---
Resolution: Fixed
Fix Version/s: (was: 3.0.0)
               fs-encryption (HADOOP-10150 and HDFS-6134)
Hadoop Flags: Reviewed

> Refactor get instance of CryptoCodec and support create via
> algorithm/mode/padding.
> ---
>
> Key: HADOOP-10853
> URL: https://issues.apache.org/jira/browse/HADOOP-10853
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: security
> Reporter: Yi Liu
> Assignee: Yi Liu
> Fix For: fs-encryption (HADOOP-10150 and HDFS-6134)
>
> Attachments: HADOOP-10853.001.patch, HADOOP-10853.002.patch,
> HADOOP-10853.003.patch, HADOOP-10853.004.patch
>
> We should be able to create an instance of *CryptoCodec*:
> * via codec class name. (Applications may have config for different crypto
> codecs)
> * via algorithm/mode/padding. (For automatic decryption, we need to find
> the correct crypto codec and proper implementation)
> * a default crypto codec through specific config.
> This JIRA is for:
> * Create instance through cipher suite (algorithm/mode/padding)
> * Refactor create instance of {{CryptoCodec}} into {{CryptoCodecFactory}}
> We need to get all crypto codecs in the system; this can be done via a Java
> ServiceLoader + hadoop.security.crypto.codecs config.
[jira] [Created] (HADOOP-10853) Refactor create instance of CryptoCodec and add CryptoCodecFactory
Yi Liu created HADOOP-10853:
---

Summary: Refactor create instance of CryptoCodec and add CryptoCodecFactory
Key: HADOOP-10853
URL: https://issues.apache.org/jira/browse/HADOOP-10853
Project: Hadoop Common
Issue Type: Sub-task
Components: security
Reporter: Yi Liu
Assignee: Yi Liu

We should be able to create an instance of *CryptoCodec*:

* via codec class name. (Applications may have config for different crypto codecs)
* via algorithm/mode/padding. (For automatic decryption, we need to find the correct crypto codec and proper implementation)
* a default crypto codec through specific config.

This JIRA is for:

* Create instance through cipher suite (algorithm/mode/padding)
* Refactor create instance of {{CryptoCodec}} into {{CryptoCodecFactory}}

We need to get all crypto codecs in the system; this can be done via a Java ServiceLoader + hadoop.security.crypto.codecs config.
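A rough shape for such a factory, resolving a codec by its cipher suite; the real proposal discovers codecs via a Java ServiceLoader plus the hadoop.security.crypto.codecs config, which is simplified to a plain registry map in this illustrative sketch:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Simplified stand-in for the codec abstraction.
interface CryptoCodec {
    String cipherSuite();
}

class CryptoCodecFactory {
    // suite name (algorithm/mode/padding) -> codec constructor
    private static final Map<String, Supplier<CryptoCodec>> REGISTRY =
        new HashMap<>();

    /** In the real design, registration would come from a ServiceLoader. */
    static void register(String suite, Supplier<CryptoCodec> ctor) {
        REGISTRY.put(suite, ctor);
    }

    /** Create a codec instance by cipher suite. */
    static CryptoCodec getInstance(String suite) {
        Supplier<CryptoCodec> ctor = REGISTRY.get(suite);
        if (ctor == null) {
            throw new IllegalArgumentException("No codec for " + suite);
        }
        return ctor.get();
    }
}
```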
[jira] [Resolved] (HADOOP-10735) Fall back AesCtrCryptoCodec implementation from OpenSSL to JCE if non native support.
[ https://issues.apache.org/jira/browse/HADOOP-10735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yi Liu resolved HADOOP-10735.
---
Resolution: Fixed
Hadoop Flags: Reviewed

> Fall back AesCtrCryptoCodec implementation from OpenSSL to JCE if non native
> support.
> ---
>
> Key: HADOOP-10735
> URL: https://issues.apache.org/jira/browse/HADOOP-10735
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: security
> Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
> Reporter: Yi Liu
> Assignee: Yi Liu
> Fix For: fs-encryption (HADOOP-10150 and HDFS-6134)
>
> Attachments: HADOOP-10735.001.patch, HADOOP-10735.002.patch,
> HADOOP-10735.003.patch, HADOOP-10735.004.patch, HADOOP-10735.005.patch,
> HADOOP-10735.006.patch, HADOOP-10735.007.patch, HADOOP-10735.008.patch
>
> If there is no native support, or the OpenSSL version is too low to support
> AES-CTR, but {{OpensslAesCtrCryptoCodec}} is configured, we need to fall
> back to the JCE implementation.
[jira] [Resolved] (HADOOP-10803) Update OpensslCipher#getInstance to accept CipherSuite#name format.
[ https://issues.apache.org/jira/browse/HADOOP-10803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yi Liu resolved HADOOP-10803.
---
Resolution: Fixed
Hadoop Flags: Reviewed

> Update OpensslCipher#getInstance to accept CipherSuite#name format.
> ---
>
> Key: HADOOP-10803
> URL: https://issues.apache.org/jira/browse/HADOOP-10803
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: security
> Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
> Reporter: Yi Liu
> Assignee: Yi Liu
> Fix For: fs-encryption (HADOOP-10150 and HDFS-6134)
>
> Attachments: HADOOP-10803.patch
>
> The name format of {{org.apache.hadoop.crypto.CipherSuite}} is the same as
> the transformation of {{javax.crypto.Cipher#getInstance}}.
> Let's update {{OpensslCipher#getInstance}} to accept the same format; then
> we can get an OpensslCipher instance using a CipherSuite.
[jira] [Created] (HADOOP-10768) Optimize Hadoop RPC encryption performance
Yi Liu created HADOOP-10768:
---

Summary: Optimize Hadoop RPC encryption performance
Key: HADOOP-10768
URL: https://issues.apache.org/jira/browse/HADOOP-10768
Project: Hadoop Common
Issue Type: Improvement
Components: performance, security
Affects Versions: 3.0.0
Reporter: Yi Liu
Assignee: Yi Liu
Fix For: 3.0.0

Hadoop RPC encryption is enabled by setting {{hadoop.rpc.protection}} to "privacy". It utilizes the SASL {{GSSAPI}} and {{DIGEST-MD5}} mechanisms for secure authentication and data protection. {{GSSAPI}} supports using AES, but without AES-NI support by default, so the encryption is slow and becomes a bottleneck.

After discussing with [~atm], [~tucu00] and [~umamaheswararao], we can do the same optimization as in HDFS-6606: use AES-NI for more than *20x* speedup.

On the other hand, RPC messages are small, but RPC is frequent and there may be lots of RPC calls on one connection, so we need to set up a benchmark to see the real improvement and then make a trade-off.
[jira] [Created] (HADOOP-10735) Fall back AESCTRCryptoCodec implementation from OpenSSL to JCE if non native support.
Yi Liu created HADOOP-10735: --- Summary: Fall back AESCTRCryptoCodec implementation from OpenSSL to JCE if non native support. Key: HADOOP-10735 URL: https://issues.apache.org/jira/browse/HADOOP-10735 Project: Hadoop Common Issue Type: Sub-task Components: security Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134) Reporter: Yi Liu Assignee: Yi Liu Fix For: fs-encryption (HADOOP-10150 and HDFS-6134) If there is no native support, or the OpenSSL version is too low to support AES-CTR, but {{OpenSSLAESCTRCryptoCodec}} is configured, we need to fall back to the JCE implementation. -- This message was sent by Atlassian JIRA (v6.2#6252)
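The fallback logic can be sketched roughly like this (class and method names are hypothetical stand-ins, not the actual Hadoop implementation; the native-availability probe is faked): prefer the OpenSSL-backed codec when native support is present, otherwise verify the JCE provider can supply AES/CTR and use that.

```java
import java.security.GeneralSecurityException;
import javax.crypto.Cipher;

// Hypothetical sketch of codec selection with a JCE fallback.
public class CodecFallback {
    /** Stand-in for a native-availability probe; pretends native support is missing. */
    public static boolean opensslAvailable() {
        return false;
    }

    /** Returns "openssl" if native AES-CTR is usable, otherwise falls back to "jce". */
    public static String chooseCodec() {
        if (opensslAvailable()) {
            return "openssl";
        }
        try {
            // Verify the JCE provider can actually supply AES-CTR before falling back.
            Cipher.getInstance("AES/CTR/NoPadding");
            return "jce";
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException("no AES-CTR implementation available", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(chooseCodec()); // jce
    }
}
```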
[jira] [Created] (HADOOP-10734) Implementation of Secure random using JNI to OpenSSL
Yi Liu created HADOOP-10734: --- Summary: Implementation of Secure random using JNI to OpenSSL Key: HADOOP-10734 URL: https://issues.apache.org/jira/browse/HADOOP-10734 Project: Hadoop Common Issue Type: Sub-task Components: security Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134) Reporter: Yi Liu Assignee: Yi Liu Fix For: fs-encryption (HADOOP-10150 and HDFS-6134) This JIRA is to implement Secure random using JNI to OpenSSL, and {{generateSecureRandom}} should be thread-safe. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (HADOOP-10713) Refactor CryptoCodec#generateSecureRandom to take a byte[]
[ https://issues.apache.org/jira/browse/HADOOP-10713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Liu resolved HADOOP-10713. - Resolution: Fixed Fix Version/s: (was: 3.0.0) fs-encryption (HADOOP-10150 and HDFS-6134) Hadoop Flags: Reviewed > Refactor CryptoCodec#generateSecureRandom to take a byte[] > -- > > Key: HADOOP-10713 > URL: https://issues.apache.org/jira/browse/HADOOP-10713 > Project: Hadoop Common > Issue Type: Sub-task > Components: security >Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134) >Reporter: Andrew Wang >Assignee: Andrew Wang >Priority: Trivial > Fix For: fs-encryption (HADOOP-10150 and HDFS-6134) > > Attachments: HADOOP-10713.001.patch, HADOOP-10713.002.patch > > > Following suit with the Java Random implementations, it'd be better if we > switched CryptoCodec#generateSecureRandom to take a byte[] for parity. > Also, let's document that this method needs to be thread-safe, which is an > important consideration for CryptoCodec implementations. -- This message was sent by Atlassian JIRA (v6.2#6252)
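The refactored signature mirrors {{java.util.Random#nextBytes}}: the caller supplies the byte[] to fill. A minimal sketch (class and method names are illustrative, not the actual Hadoop code) that also satisfies the documented thread-safety requirement, since a single java.security.SecureRandom instance is safe for concurrent use:

```java
import java.security.SecureRandom;

// Illustrative sketch of a byte[]-filling secure-random API.
public class SecureRandomSketch {
    // SecureRandom is thread-safe, so one shared instance suffices.
    private static final SecureRandom RANDOM = new SecureRandom();

    /** Fills 'bytes' with secure random data; safe to call from many threads. */
    public static void generateSecureRandom(byte[] bytes) {
        RANDOM.nextBytes(bytes);
    }

    public static void main(String[] args) {
        byte[] iv = new byte[16];
        generateSecureRandom(iv); // e.g. fill a fresh AES IV in place
        System.out.println(iv.length); // 16
    }
}
```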
[jira] [Created] (HADOOP-10693) Implementation of AES-CTR CryptoCodec using JNI to OpenSSL
Yi Liu created HADOOP-10693: --- Summary: Implementation of AES-CTR CryptoCodec using JNI to OpenSSL Key: HADOOP-10693 URL: https://issues.apache.org/jira/browse/HADOOP-10693 Project: Hadoop Common Issue Type: Sub-task Components: security Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134) Reporter: Yi Liu Assignee: Yi Liu Fix For: fs-encryption (HADOOP-10150 and HDFS-6134) In HADOOP-10603, we have an implementation of AES-CTR CryptoCodec using the Java JCE provider. To get high performance, the configured JCE provider should utilize native code and AES-NI, but in JDK 6 and 7 the embedded Java provider doesn't support this. Since not all Hadoop users will use a provider like Diceros or be able to get a signed certificate from Oracle to develop a custom provider, this JIRA will provide an implementation of AES-CTR CryptoCodec using JNI to OpenSSL directly. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (HADOOP-10514) Common side changes to support HDFS extended attributes (HDFS-2006)
[ https://issues.apache.org/jira/browse/HADOOP-10514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Liu resolved HADOOP-10514. - Resolution: Fixed Hadoop Flags: Reviewed > Common side changes to support HDFS extended attributes (HDFS-2006) > > > Key: HADOOP-10514 > URL: https://issues.apache.org/jira/browse/HADOOP-10514 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Affects Versions: HDFS XAttrs (HDFS-2006) >Reporter: Uma Maheswara Rao G >Assignee: Yi Liu > > This is an umbrella issue for tracking all Hadoop Common changes required to > support the HDFS extended attributes implementation. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (HADOOP-10662) NullPointerException in CryptoInputStream while wrapped stream is not ByteBufferReadable. Add tests using normal stream.
[ https://issues.apache.org/jira/browse/HADOOP-10662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Liu resolved HADOOP-10662. - Resolution: Fixed Hadoop Flags: Reviewed Committed to branch. > NullPointerException in CryptoInputStream while wrapped stream is not > ByteBufferReadable. Add tests using normal stream. > > > Key: HADOOP-10662 > URL: https://issues.apache.org/jira/browse/HADOOP-10662 > Project: Hadoop Common > Issue Type: Sub-task > Components: security >Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134) >Reporter: Yi Liu >Assignee: Yi Liu > Fix For: fs-encryption (HADOOP-10150 and HDFS-6134) > > Attachments: HADOOP-10662.patch > > > NullPointerException in CryptoInputStream while the wrapped stream is not > ByteBufferReadable. > Add tests for crypto streams using a normal stream which does not support the > additional interfaces that the Hadoop FileSystem streams implement (Seekable, > PositionedReadable, ByteBufferReadable, HasFileDescriptor, CanSetDropBehind, > CanSetReadahead, HasEnhancedByteBufferAccess, Syncable). -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HADOOP-10662) NullPointerException in CryptoInputStream while wrapped stream is not ByteBufferReadable. Add tests using normal stream.
Yi Liu created HADOOP-10662: --- Summary: NullPointerException in CryptoInputStream while wrapped stream is not ByteBufferReadable. Add tests using normal stream. Key: HADOOP-10662 URL: https://issues.apache.org/jira/browse/HADOOP-10662 Project: Hadoop Common Issue Type: Bug Components: security Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134) Reporter: Yi Liu Assignee: Yi Liu Fix For: fs-encryption (HADOOP-10150 and HDFS-6134) NullPointerException in CryptoInputStream while the wrapped stream is not ByteBufferReadable. Add tests for crypto streams using a normal stream which does not support the additional interfaces that the Hadoop FileSystem streams implement (Seekable, PositionedReadable, ByteBufferReadable, HasFileDescriptor, CanSetDropBehind, CanSetReadahead, HasEnhancedByteBufferAccess, Syncable). -- This message was sent by Atlassian JIRA (v6.2#6252)
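The shape of the fix can be sketched with a capability check (the interface is re-declared here for self-containment; the real one lives in org.apache.hadoop.fs, and the class below is illustrative): take the ByteBuffer fast path only when the wrapped stream actually implements ByteBufferReadable, and otherwise fall back to a byte[] copy instead of dereferencing a capability that is not there.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;

// Illustrative sketch of a capability-checked read for a wrapping stream.
public class SafeRead {
    interface ByteBufferReadable {
        int read(ByteBuffer buf) throws IOException;
    }

    public static int readFrom(InputStream in, ByteBuffer buf) throws IOException {
        if (in instanceof ByteBufferReadable) {
            // Fast path: the wrapped stream can fill a ByteBuffer directly.
            return ((ByteBufferReadable) in).read(buf);
        }
        // Fallback for plain streams: read into a heap array, then copy.
        byte[] tmp = new byte[buf.remaining()];
        int n = in.read(tmp);
        if (n > 0) {
            buf.put(tmp, 0, n);
        }
        return n;
    }

    public static void main(String[] args) throws IOException {
        InputStream plain = new ByteArrayInputStream("hello".getBytes());
        ByteBuffer buf = ByteBuffer.allocate(8);
        System.out.println(readFrom(plain, buf)); // 5
    }
}
```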
[jira] [Resolved] (HADOOP-10653) Add a new constructor for CryptoInputStream that receives current position of wrapped stream.
[ https://issues.apache.org/jira/browse/HADOOP-10653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Liu resolved HADOOP-10653. - Resolution: Fixed Hadoop Flags: Reviewed > Add a new constructor for CryptoInputStream that receives current position of > wrapped stream. > - > > Key: HADOOP-10653 > URL: https://issues.apache.org/jira/browse/HADOOP-10653 > Project: Hadoop Common > Issue Type: Sub-task > Components: security >Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134) >Reporter: Yi Liu >Assignee: Yi Liu > Fix For: fs-encryption (HADOOP-10150 and HDFS-6134) > > Attachments: HADOOP-10653.patch > > > Add a new constructor for {{CryptoInputStream}} that receives the current > position of the wrapped stream. > We need it for the shuffle stream over HTTP. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HADOOP-10653) Add a new constructor for CryptoInputStream that receives current position of wrapped stream.
Yi Liu created HADOOP-10653: --- Summary: Add a new constructor for CryptoInputStream that receives current position of wrapped stream. Key: HADOOP-10653 URL: https://issues.apache.org/jira/browse/HADOOP-10653 Project: Hadoop Common Issue Type: Task Components: security Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134) Reporter: Yi Liu Assignee: Yi Liu Fix For: fs-encryption (HADOOP-10150 and HDFS-6134) Add a new constructor for {{CryptoInputStream}} that receives the current position of the wrapped stream. In the existing constructor, if the InputStream is an instance of Seekable, we get the absolute position using "getPos()" and use it to set the current position. -- This message was sent by Atlassian JIRA (v6.2#6252)
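The two-constructor pattern can be sketched as follows (simplified stand-ins; the real class is org.apache.hadoop.crypto.CryptoInputStream, and the Seekable interface is re-declared here for self-containment). The original constructor probes Seekable#getPos() when available; the new one accepts the stream's current offset directly, for wrapped streams such as an HTTP shuffle stream that cannot report their own position.

```java
import java.io.IOException;
import java.io.InputStream;

// Illustrative sketch of a wrapper that needs the wrapped stream's position.
public class PositionedWrapper {
    interface Seekable {
        long getPos() throws IOException;
    }

    private final long streamOffset;

    /** Original behavior: derive the offset from the stream if it is Seekable. */
    public PositionedWrapper(InputStream in) throws IOException {
        this(in, (in instanceof Seekable) ? ((Seekable) in).getPos() : 0L);
    }

    /** New constructor: the caller supplies the current position of 'in'. */
    public PositionedWrapper(InputStream in, long streamOffset) {
        this.streamOffset = streamOffset;
    }

    public long getStreamOffset() { return streamOffset; }
}
```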
[jira] [Resolved] (HADOOP-10635) Add a method to CryptoCodec to generate SRNs for IV
[ https://issues.apache.org/jira/browse/HADOOP-10635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Liu resolved HADOOP-10635. - Resolution: Fixed Hadoop Flags: Reviewed Committed to branch. Move the {{DEFAULT_SECURE_RANDOM_ALG}} to CommonConfigurationKeysPublic.java: {code} /** Default value for HADOOP_SECURITY_SECURE_RANDOM_ALGORITHM_KEY */ public static final String HADOOP_SECURITY_SECURE_RANDOM_ALGORITHM_DEFAULT = "SHA1PRNG"; {code} > Add a method to CryptoCodec to generate SRNs for IV > --- > > Key: HADOOP-10635 > URL: https://issues.apache.org/jira/browse/HADOOP-10635 > Project: Hadoop Common > Issue Type: Sub-task > Components: security >Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134) >Reporter: Alejandro Abdelnur >Assignee: Yi Liu > Fix For: 3.0.0 > > Attachments: HADOOP-10635.1.patch, HADOOP-10635.patch > > > SRN generators are provided by crypto libraries. The CryptoCodec gives access > to a crypto library, thus it makes sense to expose the SRN generator on the > CryptoCodec API. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (HADOOP-10632) Minor improvements to Crypto input and output streams
[ https://issues.apache.org/jira/browse/HADOOP-10632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Liu resolved HADOOP-10632. - Resolution: Fixed > Minor improvements to Crypto input and output streams > - > > Key: HADOOP-10632 > URL: https://issues.apache.org/jira/browse/HADOOP-10632 > Project: Hadoop Common > Issue Type: Sub-task > Components: security >Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134) >Reporter: Alejandro Abdelnur >Assignee: Yi Liu > Fix For: 3.0.0 > > Attachments: HADOOP-10632.1.patch, HADOOP-10632.2.patch, > HADOOP-10632.3.patch, HADOOP-10632.4.patch, HADOOP-10632.patch > > > Minor follow up feedback on the crypto streams -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (HADOOP-10617) Tests for Crypto input and output streams using fake streams implementing Hadoop streams interfaces.
[ https://issues.apache.org/jira/browse/HADOOP-10617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Liu resolved HADOOP-10617. - Resolution: Fixed Hadoop Flags: Reviewed Have merged this patch into HADOOP-10603 and Committed to branch. > Tests for Crypto input and output streams using fake streams implementing > Hadoop streams interfaces. > > > Key: HADOOP-10617 > URL: https://issues.apache.org/jira/browse/HADOOP-10617 > Project: Hadoop Common > Issue Type: Sub-task > Components: security >Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134) >Reporter: Yi Liu >Assignee: Yi Liu > Fix For: fs-encryption (HADOOP-10150 and HDFS-6134) > > Attachments: HADOOP-10617.1.patch, HADOOP-10617.2.patch, > HADOOP-10617.3.patch, HADOOP-10617.patch > > > Tests for Crypto input and output streams using fake input and output streams > implementing Hadoop streams interfaces. To cover functionality of Hadoop > streams with crypto. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (HADOOP-10603) Crypto input and output streams implementing Hadoop stream interfaces
[ https://issues.apache.org/jira/browse/HADOOP-10603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Liu resolved HADOOP-10603. - Resolution: Fixed Target Version/s: fs-encryption (HADOOP-10150 and HDFS-6134) Hadoop Flags: Reviewed > Crypto input and output streams implementing Hadoop stream interfaces > - > > Key: HADOOP-10603 > URL: https://issues.apache.org/jira/browse/HADOOP-10603 > Project: Hadoop Common > Issue Type: Sub-task > Components: security >Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134) >Reporter: Alejandro Abdelnur >Assignee: Yi Liu > Fix For: fs-encryption (HADOOP-10150 and HDFS-6134) > > Attachments: CryptoInputStream.java, CryptoOutputStream.java, > HADOOP-10603.1.patch, HADOOP-10603.10.patch, HADOOP-10603.2.patch, > HADOOP-10603.3.patch, HADOOP-10603.4.patch, HADOOP-10603.5.patch, > HADOOP-10603.6.patch, HADOOP-10603.7.patch, HADOOP-10603.8.patch, > HADOOP-10603.9.patch, HADOOP-10603.patch > > > A common set of Crypto Input/Output streams. They would be used by > CryptoFileSystem, HDFS encryption, MapReduce intermediate data and spills. > Note we cannot use the JDK Cipher Input/Output streams directly because we > need to support the additional interfaces that the Hadoop FileSystem streams > implement (Seekable, PositionedReadable, ByteBufferReadable, > HasFileDescriptor, CanSetDropBehind, CanSetReadahead, > HasEnhancedByteBufferAccess, Syncable). -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HADOOP-10628) Javadoc and few code style improvement for Crypto input and output streams
Yi Liu created HADOOP-10628: --- Summary: Javadoc and few code style improvement for Crypto input and output streams Key: HADOOP-10628 URL: https://issues.apache.org/jira/browse/HADOOP-10628 Project: Hadoop Common Issue Type: Improvement Components: security Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134) Reporter: Yi Liu Assignee: Yi Liu Fix For: fs-encryption (HADOOP-10150 and HDFS-6134) There are some additional comments from [~clamb] related to javadoc and a few code-style issues on HADOOP-10603; let's fix them in this follow-on JIRA. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HADOOP-10617) Tests for Crypto input and output streams using fake streams implementing Hadoop streams interfaces.
Yi Liu created HADOOP-10617: --- Summary: Tests for Crypto input and output streams using fake streams implementing Hadoop streams interfaces. Key: HADOOP-10617 URL: https://issues.apache.org/jira/browse/HADOOP-10617 Project: Hadoop Common Issue Type: Test Components: security Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134) Reporter: Yi Liu Assignee: Yi Liu Fix For: fs-encryption (HADOOP-10150 and HDFS-6134) 1. Test crypto reading with different buffer sizes. 2. Test hflush/hsync of the crypto output stream, also with different buffer sizes. 3. Test positioned read. 4. Test seek to different positions. 5. Test get position. 6. Test skip. 7. Test byte buffer read with different buffer sizes. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (HADOOP-10151) Implement a Buffer-Based Chiper InputStream and OutputStream
[ https://issues.apache.org/jira/browse/HADOOP-10151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Liu resolved HADOOP-10151. - Resolution: Won't Fix > Implement a Buffer-Based Chiper InputStream and OutputStream > > > Key: HADOOP-10151 > URL: https://issues.apache.org/jira/browse/HADOOP-10151 > Project: Hadoop Common > Issue Type: Sub-task > Components: security >Affects Versions: 3.0.0 >Reporter: Yi Liu >Assignee: Yi Liu > Labels: rhino > Fix For: 3.0.0 > > Attachments: HADOOP-10151.patch > > > Cipher InputStream and OutputStream are buffer-based, and the buffer is used > to cache the encrypted data or result. Cipher InputStream is used to read > encrypted data, and the result is plain text. Cipher OutputStream is used to > write plain data, and the result is encrypted data. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (HADOOP-10153) Define Crypto policy interfaces and provide its default implementation.
[ https://issues.apache.org/jira/browse/HADOOP-10153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Liu resolved HADOOP-10153. - Resolution: Won't Fix > Define Crypto policy interfaces and provide its default implementation. > --- > > Key: HADOOP-10153 > URL: https://issues.apache.org/jira/browse/HADOOP-10153 > Project: Hadoop Common > Issue Type: Sub-task > Components: security >Affects Versions: 3.0.0 >Reporter: Yi Liu >Assignee: Yi Liu > Labels: rhino > Fix For: 3.0.0 > > > This JIRA defines the crypto policy interface; developers/users can implement > their own crypto policy to decide how files/directories are encrypted. This > JIRA also includes a default implementation. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (HADOOP-10154) Provide cryptographic filesystem implementation and it's data IO.
[ https://issues.apache.org/jira/browse/HADOOP-10154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Liu resolved HADOOP-10154. - Resolution: Won't Fix > Provide cryptographic filesystem implementation and it's data IO. > - > > Key: HADOOP-10154 > URL: https://issues.apache.org/jira/browse/HADOOP-10154 > Project: Hadoop Common > Issue Type: Sub-task > Components: security >Affects Versions: 3.0.0 >Reporter: Yi Liu >Assignee: Yi Liu > Labels: rhino > Fix For: 3.0.0 > > > This JIRA includes a cryptographic filesystem data InputStream which extends > FSDataInputStream and an OutputStream which extends FSDataOutputStream. > Implementation of the cryptographic file system is also included in this JIRA. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (HADOOP-10155) Hadoop-crypto which includes native cipher implementation.
[ https://issues.apache.org/jira/browse/HADOOP-10155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Liu resolved HADOOP-10155. - Resolution: Won't Fix > Hadoop-crypto which includes native cipher implementation. > --- > > Key: HADOOP-10155 > URL: https://issues.apache.org/jira/browse/HADOOP-10155 > Project: Hadoop Common > Issue Type: Sub-task > Components: security >Affects Versions: 3.0.0 >Reporter: Yi Liu >Assignee: Yi Liu > Labels: rhino > Fix For: 3.0.0 > > > A native cipher is used to improve performance: when using OpenSSL with > AES-NI enabled, the native cipher is 20x faster than the Java cipher, for > example in CBC/CTR mode. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (HADOOP-10152) Distributed file cipher InputStream and OutputStream which provide 1:1 mapping of plain text data and cipher data.
[ https://issues.apache.org/jira/browse/HADOOP-10152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Liu resolved HADOOP-10152. - Resolution: Duplicate > Distributed file cipher InputStream and OutputStream which provide 1:1 > mapping of plain text data and cipher data. > -- > > Key: HADOOP-10152 > URL: https://issues.apache.org/jira/browse/HADOOP-10152 > Project: Hadoop Common > Issue Type: Sub-task > Components: security >Affects Versions: 3.0.0 >Reporter: Yi Liu >Assignee: Yi Liu > Labels: rhino > Fix For: 3.0.0 > > > To allow easy seek and positioned read of a distributed file, the length of > the encrypted file should be the same as the length of the plain file, and > the positions should have a 1:1 mapping. So in this JIRA we define the > distributed file cipher InputStream (FSDecryptorStream) and OutputStream > (FSEncryptorStream). The distributed file cipher InputStream is seekable and > positioned-readable. This JIRA differs from HADOOP-10151: the file may be > read and written many times and on multiple nodes. -- This message was sent by Atlassian JIRA (v6.2#6252)
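The 1:1 length mapping is what CTR mode provides: ciphertext byte pos depends only on counter block pos/16 and offset pos%16, so a reader can seek by re-deriving the counter from the file position. A self-contained sketch of the seek arithmetic using the JDK's AES/CTR (class and method names are illustrative, not the JIRA's FSDecryptorStream):

```java
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

// Illustrative sketch: decrypt an AES/CTR stream from an arbitrary position.
public class CtrSeek {
    static final int BLOCK = 16; // AES block size in bytes

    /** Returns initialIv advanced by 'blocks' counter increments (big-endian). */
    public static byte[] advanceIv(byte[] initialIv, long blocks) {
        byte[] iv = initialIv.clone();
        long carry = blocks;
        for (int i = iv.length - 1; i >= 0 && carry != 0; i--) {
            long sum = (iv[i] & 0xFF) + (carry & 0xFF);
            iv[i] = (byte) sum;
            carry = (carry >>> 8) + (sum >>> 8);
        }
        return iv;
    }

    /** Decrypts from byte 'pos' only, and checks it matches the plaintext tail. */
    public static boolean seekMatches() throws Exception {
        SecretKeySpec key = new SecretKeySpec(new byte[16], "AES");
        byte[] iv = new byte[16];
        byte[] plain = new byte[100];
        for (int i = 0; i < plain.length; i++) plain[i] = (byte) i;

        Cipher enc = Cipher.getInstance("AES/CTR/NoPadding");
        enc.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        byte[] cipher = enc.doFinal(plain);

        // Seek to position 37: advance the counter by 37/16 blocks, then skip
        // 37%16 keystream bytes by feeding a zero-padded prefix to the cipher.
        int pos = 37;
        Cipher dec = Cipher.getInstance("AES/CTR/NoPadding");
        dec.init(Cipher.DECRYPT_MODE, key,
                 new IvParameterSpec(advanceIv(iv, pos / BLOCK)));
        int pad = pos % BLOCK;
        byte[] padded = new byte[pad + cipher.length - pos];
        System.arraycopy(cipher, pos, padded, pad, cipher.length - pos);
        byte[] out = dec.doFinal(padded);
        byte[] tail = Arrays.copyOfRange(out, pad, out.length);
        return Arrays.equals(tail, Arrays.copyOfRange(plain, pos, plain.length));
    }

    public static void main(String[] args) throws Exception {
        System.out.println(seekMatches()); // true
    }
}
```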
[jira] [Created] (HADOOP-10575) Small fixes for XAttrCommands and test.
Yi Liu created HADOOP-10575: --- Summary: Small fixes for XAttrCommands and test. Key: HADOOP-10575 URL: https://issues.apache.org/jira/browse/HADOOP-10575 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: HDFS XAttrs (HDFS-2006) Reporter: Yi Liu Assignee: Yi Liu Priority: Minor Fix For: HDFS XAttrs (HDFS-2006) Small fixes for XAttrCommands and test. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HADOOP-10567) Shift XAttr value encoding code out for reusing.
Yi Liu created HADOOP-10567: --- Summary: Shift XAttr value encoding code out for reusing. Key: HADOOP-10567 URL: https://issues.apache.org/jira/browse/HADOOP-10567 Project: Hadoop Common Issue Type: Improvement Components: fs Affects Versions: HDFS XAttrs (HDFS-2006) Reporter: Yi Liu Assignee: Yi Liu Priority: Minor Fix For: HDFS XAttrs (HDFS-2006) XAttr value encoding (encoding a byte[] to a text, hex or base64 string for better display and input) is common and can be reused. It can be used by FsShell, in HTTP requests as a parameter, and in JSON responses. -- This message was sent by Atlassian JIRA (v6.2#6252)
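A sketch of such a shared encoder using only the JDK (the class and method names are hypothetical; the 0x/0s prefixes follow the Linux getfattr convention for hex and base64 values):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Illustrative sketch of a reusable XAttr value encoder.
public class XAttrValueCodec {
    /** Renders a raw value as a 0x-prefixed lowercase hex string. */
    public static String toHex(byte[] value) {
        StringBuilder sb = new StringBuilder("0x");
        for (byte b : value) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    /** Renders a raw value as a 0s-prefixed base64 string. */
    public static String toBase64(byte[] value) {
        return "0s" + Base64.getEncoder().encodeToString(value);
    }

    public static void main(String[] args) {
        byte[] v = "value1".getBytes(StandardCharsets.UTF_8);
        System.out.println(toHex(v));    // 0x76616c756531
        System.out.println(toBase64(v)); // 0sdmFsdWUx
    }
}
```

The same two methods can then back FsShell display, HTTP parameter encoding, and JSON responses, keeping the three call sites consistent.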
[jira] [Created] (HADOOP-10521) FsShell commands for extended attributes.
Yi Liu created HADOOP-10521: --- Summary: FsShell commands for extended attributes. Key: HADOOP-10521 URL: https://issues.apache.org/jira/browse/HADOOP-10521 Project: Hadoop Common Issue Type: New Feature Components: fs Reporter: Yi Liu Assignee: Yi Liu Attachments: HADOOP-10521.patch “setfattr” and “getfattr” commands are added to FsShell for XAttr, and these are the same as in Linux. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HADOOP-10520) Extended attributes definition and FileSystem APIs for extended attributes.
Yi Liu created HADOOP-10520: --- Summary: Extended attributes definition and FileSystem APIs for extended attributes. Key: HADOOP-10520 URL: https://issues.apache.org/jira/browse/HADOOP-10520 Project: Hadoop Common Issue Type: New Feature Components: fs Reporter: Yi Liu Assignee: Yi Liu Fix For: 3.0.0 This JIRA defines XAttr (extended attribute): it consists of a name and associated data, and 4 namespaces are defined: user, trusted, security and system. FileSystem APIs for XAttr include setXAttrs, getXAttrs, removeXAttrs and so on. For more information, please refer to HDFS-2006. -- This message was sent by Atlassian JIRA (v6.2#6252)
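A minimal in-memory sketch of the proposed API surface (heavily simplified; the real methods live on org.apache.hadoop.fs.FileSystem, and this class is hypothetical): names are prefixed with one of the four namespaces, and a name outside them is rejected.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative in-memory stand-in for the namespaced XAttr API.
public class XAttrStore {
    private static final String[] NAMESPACES =
        { "user.", "trusted.", "security.", "system." };

    private final Map<String, byte[]> xattrs = new HashMap<>();

    /** Stores a value under a namespaced name, e.g. "user.owner-note". */
    public void setXAttr(String name, byte[] value) {
        for (String ns : NAMESPACES) {
            if (name.startsWith(ns)) {
                xattrs.put(name, value.clone());
                return;
            }
        }
        throw new IllegalArgumentException("name has no valid namespace: " + name);
    }

    public byte[] getXAttr(String name) { return xattrs.get(name); }

    public void removeXAttr(String name) { xattrs.remove(name); }
}
```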
[jira] [Created] (HADOOP-10156) Define Buffer-based Encryptor/Decryptor interfaces and provide implementation for AES CTR.
Yi Liu created HADOOP-10156: --- Summary: Define Buffer-based Encryptor/Decryptor interfaces and provide implementation for AES CTR. Key: HADOOP-10156 URL: https://issues.apache.org/jira/browse/HADOOP-10156 Project: Hadoop Common Issue Type: Sub-task Components: security Affects Versions: 3.0.0 Reporter: Yi Liu Assignee: Yi Liu Fix For: 3.0.0 -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[jira] [Created] (HADOOP-10155) Hadoop-crypto which includes native cipher implementation.
Yi Liu created HADOOP-10155: --- Summary: Hadoop-crypto which includes native cipher implementation. Key: HADOOP-10155 URL: https://issues.apache.org/jira/browse/HADOOP-10155 Project: Hadoop Common Issue Type: Sub-task Components: security Affects Versions: 3.0.0 Reporter: Yi Liu Assignee: Yi Liu Fix For: 3.0.0 -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[jira] [Created] (HADOOP-10154) Provide cryptographic filesystem implementation and it's data IO.
Yi Liu created HADOOP-10154: --- Summary: Provide cryptographic filesystem implementation and it's data IO. Key: HADOOP-10154 URL: https://issues.apache.org/jira/browse/HADOOP-10154 Project: Hadoop Common Issue Type: Sub-task Components: security Affects Versions: 3.0.0 Reporter: Yi Liu Assignee: Yi Liu Fix For: 3.0.0 -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[jira] [Created] (HADOOP-10153) Define Crypto policy interfaces and provide its default implementation.
Yi Liu created HADOOP-10153: --- Summary: Define Crypto policy interfaces and provide its default implementation. Key: HADOOP-10153 URL: https://issues.apache.org/jira/browse/HADOOP-10153 Project: Hadoop Common Issue Type: Sub-task Components: security Affects Versions: 3.0.0 Reporter: Yi Liu Assignee: Yi Liu Fix For: 3.0.0 -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[jira] [Created] (HADOOP-10152) Distributed file cipher InputStream and OutputStream which provide 1:1 mapping of plain text data and cipher data.
Yi Liu created HADOOP-10152: --- Summary: Distributed file cipher InputStream and OutputStream which provide 1:1 mapping of plain text data and cipher data. Key: HADOOP-10152 URL: https://issues.apache.org/jira/browse/HADOOP-10152 Project: Hadoop Common Issue Type: Sub-task Components: security Affects Versions: 3.0.0 Reporter: Yi Liu Assignee: Yi Liu Fix For: 3.0.0 -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[jira] [Created] (HADOOP-10151) Implement a Buffer-Based Chiper InputStream and OutPutStream
Yi Liu created HADOOP-10151: --- Summary: Implement a Buffer-Based Chiper InputStream and OutPutStream Key: HADOOP-10151 URL: https://issues.apache.org/jira/browse/HADOOP-10151 Project: Hadoop Common Issue Type: Sub-task Components: security Affects Versions: 3.0.0 Reporter: Yi Liu Assignee: Yi Liu Fix For: 3.0.0 -- This message was sent by Atlassian JIRA (v6.1.4#6159)
[jira] [Created] (HADOOP-9838) Token Implementation for HAS
Yi Liu created HADOOP-9838: -- Summary: Token Implementation for HAS Key: HADOOP-9838 URL: https://issues.apache.org/jira/browse/HADOOP-9838 Project: Hadoop Common Issue Type: Task Components: security Affects Versions: 3.0.0 Reporter: Yi Liu This issue is the Token Implementation for HAS. We will implement the Identity Token and the Access Token. The Identity Token is obtained after the client is authenticated; it is issued by the Identity Token Service and is required when the client requests an Access Token. The Access Token is issued by the Authorization Service, and is used when the client accesses a Hadoop service. In this JIRA, we'll: • Implement Identity Token: contains identity attributes of the client; it is signed and can be verified, and it is valid within its lifecycle. • Implement Access Token: contains more attributes which can be used for authorization; it is signed and can be verified, and it is valid within its lifecycle. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-9839) Authentication Service for HAS
Yi Liu created HADOOP-9839: -- Summary: Authentication Service for HAS Key: HADOOP-9839 URL: https://issues.apache.org/jira/browse/HADOOP-9839 Project: Hadoop Common Issue Type: Task Components: security Affects Versions: 3.0.0 Reporter: Yi Liu In this JIRA, we will implement the Authentication Service and the Identity Token Service for HAS, and we will also implement some built-in authentication modules, such as LDAP/AD, Kerberos and so on. Clients authenticate with the Authentication Service and get an authentication result. The Identity Token Service issues the Identity Token. The scope of this task is highlighted as follows: • Implement the Authentication Service defined in the TokenAuth framework. The Authentication Service supplies an authentication framework and several built-in authentication module implementations, and customers can also implement their own authentication modules and plug them into the Authentication Service. • Implement the Identity Token Service. It receives the authentication result and issues an identity token. • Implement an authentication management facility. • Implement an LDAP/AD authentication login module, so a client can log in using an LDAP/AD account. • Implement a Kerberos authentication login module. • Implement some web SSO login modules, such as a SAML2 login module. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-9837) Hadoop Token Command
Yi Liu created HADOOP-9837: -- Summary: Hadoop Token Command Key: HADOOP-9837 URL: https://issues.apache.org/jira/browse/HADOOP-9837 Project: Hadoop Common Issue Type: Task Components: security Affects Versions: 3.0.0 Reporter: Yi Liu This JIRA is to define commands for Hadoop token. The scope of this task is highlighted as following: • Token init: authenticate and request an identity token, then persist the token in token cache for later reuse. • Token display: show the existing token with its info and attributes in the token cache. • Token revoke: revoke a token so that the token will no longer be valid and cannot be used later. • Token renew: extend the lifecycle of a token before it’s expired. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-9836) Token definition and API
Yi Liu created HADOOP-9836: -- Summary: Token definition and API Key: HADOOP-9836 URL: https://issues.apache.org/jira/browse/HADOOP-9836 Project: Hadoop Common Issue Type: Task Components: security Affects Versions: 3.0.0 Reporter: Yi Liu We need to define common token attributes and APIs for the TokenAuth framework so that arbitrary token formats can be adopted into the framework. This JIRA is a sub-task of the TokenAuth framework. Common token properties, APIs and facilities that identity/access tokens require will be defined. In this JIRA, we'll: • Define the Token generation API, including Token serialization/deserialization, Token encryption/signing and Token revoke/expire/renew. • Define the Token validation API, including Token decryption/verification and Token checks (timestamp, audience, etc.). • Define the Token Attribute API, including attribute setting, querying and so on. • Define required attributes and optional attributes for identity tokens and access tokens. • Implement Token utilities, such as print/debug. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
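The sign/verify half of such a token API can be sketched with an HMAC over the attribute payload (all names here are hypothetical illustrations, not the HAS design; a real token format would also carry timestamps, audience, and key identifiers):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Illustrative sketch: a payload string MAC'd with a shared key.
public class SignedToken {
    /** Appends a base64 HMAC-SHA256 signature to the payload. */
    public static String sign(String payload, byte[] key) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            byte[] sig = mac.doFinal(payload.getBytes(StandardCharsets.UTF_8));
            return payload + "." + Base64.getEncoder().encodeToString(sig);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    /** Validation recomputes the MAC over the payload and compares. */
    public static boolean verify(String token, byte[] key) {
        int dot = token.lastIndexOf('.');
        // Note: a real implementation should use a constant-time comparison.
        return dot >= 0 && token.equals(sign(token.substring(0, dot), key));
    }

    public static void main(String[] args) {
        byte[] key = "secret".getBytes(StandardCharsets.UTF_8);
        String t = sign("user=alice;expiry=1700000000", key);
        System.out.println(verify(t, key));       // true
        System.out.println(verify(t + "x", key)); // false
    }
}
```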
[jira] [Created] (HADOOP-9835) Identity Token Service API
Yi Liu created HADOOP-9835: -- Summary: Identity Token Service API Key: HADOOP-9835 URL: https://issues.apache.org/jira/browse/HADOOP-9835 Project: Hadoop Common Issue Type: Task Components: security Affects Versions: 3.0.0 Reporter: Yi Liu The Identity Token Service is used by a client to request an identity token with the authentication result, after the client authenticates with the Authentication Service. This JIRA is to define the Identity Token Service API, and the pluggable framework allows different implementations. The scope of this task is highlighted as follows: • Define the Identity Token Service API. • Specify how to configure and register an Identity Token Service implementation. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-9834) Authentication Service API
Yi Liu created HADOOP-9834: -- Summary: Authentication Service API Key: HADOOP-9834 URL: https://issues.apache.org/jira/browse/HADOOP-9834 Project: Hadoop Common Issue Type: Task Components: security Affects Versions: 3.0.0 Reporter: Yi Liu The Authentication Service is used to authenticate users and services. The authentication result can then be used to request an identity token from the Identity Token Service. This JIRA is to define the Authentication Service API, and the pluggable framework allows different implementations. The scope of this task is highlighted as follows: • Define the Authentication Service API. • Define the Authentication module API for both server and client. • Define the Authentication module management API for both server and client. • Define the protocol and procedure for authn module negotiation between client and server. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira