[jira] [Updated] (HADOOP-15407) Support Windows Azure Storage - Blob file system in Hadoop
[ https://issues.apache.org/jira/browse/HADOOP-15407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Da Zhou updated HADOOP-15407:
-----------------------------
    Attachment: HADOOP-15407-HADOOP-15407.008.patch

> Support Windows Azure Storage - Blob file system in Hadoop
> ----------------------------------------------------------
>
>                 Key: HADOOP-15407
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15407
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: fs/azure
>    Affects Versions: 3.2.0
>            Reporter: Esfandiar Manii
>            Assignee: Esfandiar Manii
>            Priority: Major
>         Attachments: HADOOP-15407-001.patch, HADOOP-15407-002.patch, HADOOP-15407-003.patch, HADOOP-15407-004.patch, HADOOP-15407-HADOOP-15407.006.patch, HADOOP-15407-HADOOP-15407.007.patch, HADOOP-15407-HADOOP-15407.008.patch, HADOOP-15407-patch-atop-patch-007.patch
>
> *Description*
>
> This JIRA adds a new file system implementation, ABFS, for running Big Data and Analytics workloads against Azure Storage. It is a complete rewrite of the previous WASB driver, with a heavy focus on optimizing both performance and cost.
>
> *High-level design*
>
> At a high level, the code extends the FileSystem class to provide an implementation for accessing blobs in Azure Storage. The scheme abfs is used for access over HTTP, and abfss for access over HTTPS. The following URI scheme is used to address individual paths:
>
> abfs[s]://<filesystem>@<account>.dfs.core.windows.net/<path>
>
> ABFS is intended as a replacement for WASB. WASB is not deprecated, but it is in pure maintenance mode, and customers should upgrade to ABFS once it reaches General Availability later in CY18.
> Benefits of ABFS include:
> * Higher scale (capacity, throughput, and IOPS) for Big Data and Analytics workloads, by allowing higher limits on storage accounts
> * No ramp-up time with Storage backend partitioning; blocks are now automatically sharded across partitions in the Storage backend. This avoids the need for using temporary/intermediate files, which increase cost (and framework complexity around committing jobs/tasks)
> * Much higher read and write throughput on single files (tens of Gbps by default)
> * Retention of all the Azure Blob features customers are familiar with and expect, while gaining the benefits of future Blob features as well
>
> ABFS incorporates Hadoop Filesystem metrics to monitor file system throughput and operations. Ambari metrics are not currently implemented for ABFS, but will be available soon.
>
> *Credits and history*
>
> Credit for this work goes to (hope I don't forget anyone): Shane Mainali, Thomas Marquardt, Zichen Sun, Georgi Chalakov, Esfandiar Manii, Amit Singh, Dana Kaban, Da Zhou, Junhua Gu, Saher Ahwal, Saurabh Pant, and James Baker.
>
> *Test*
>
> ABFS has gone through many test procedures, including Hadoop file system contract tests, unit testing, functional testing, and manual testing. All the JUnit tests provided with the driver can run in either sequential or parallel fashion to reduce the testing time.
> Besides unit tests, we have used ABFS as the default file system in Azure HDInsight. Azure HDInsight will very soon offer ABFS as a storage option. (HDFS is also used, but not as the default file system.) Various customer and test workloads have been run against clusters with such configurations for quite some time. Benchmarks such as Tera*, TPC-DS, Spark Streaming, Spark SQL, and others have been run for scenario, performance, and functional testing. Third parties and customers have also done various testing of ABFS.
> The current version reflects the version of the code tested and used in our production environment.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
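The abfs[s] addressing scheme described above can be exercised with plain {{java.net.URI}}. The sketch below is illustrative only (it is not code from the patch), and the filesystem/account names are hypothetical:

```java
import java.net.URI;

public class AbfsUriSketch {
    // Splits an ABFS-style URI into scheme, filesystem (the text before '@'),
    // account host, and path. Names like "data" and "myacct" are made up.
    public static String[] parts(String uri) {
        URI u = URI.create(uri);
        return new String[] {
            u.getScheme(),    // "abfs" (HTTP) or "abfss" (HTTPS)
            u.getUserInfo(),  // <filesystem>, i.e. the part before '@'
            u.getHost(),      // <account>.dfs.core.windows.net
            u.getPath()       // path within the filesystem
        };
    }

    public static void main(String[] args) {
        String[] p = parts("abfss://data@myacct.dfs.core.windows.net/logs/day1");
        System.out.println(String.join(" | ", p));
    }
}
```

Running the sketch shows how the single URI carries both the filesystem name and the storage account host, which is what lets one FileSystem instance address an individual path.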
[jira] [Commented] (HADOOP-15407) Support Windows Azure Storage - Blob file system in Hadoop
[ https://issues.apache.org/jira/browse/HADOOP-15407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16509107#comment-16509107 ]

Da Zhou commented on HADOOP-15407:
----------------------------------
Submitting HADOOP-15407-HADOOP-15407.008.patch; all ABFS tests passed against my storage account in West US. Updates in the patch:
- Resolved findbugs violations
- Resolved checkstyle violations
- Added missing javadocs
- Made AzureBlobFileSystemException a subclass of IOException and updated the exception checks
- Reinstated the wasb contract tests in the parallel runs, and enabled parallel runs for the ABFS contract tests
- Renamed the service-injection interface and implementation with Azure-specific names
- Replaced loggingService with SLF4J

{noformat}
mvn -T 1C -Dparallel-tests -DtestsThreadCount=8 clean verify

[INFO] --- maven-antrun-plugin:1.7:run (create-parallel-tests-dirs) @ hadoop-azure ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: /home/zhoda/dev/Projects/apache-hadoop/hadoop/hadoop-tools/hadoop-azure/target/test-dir/1
    [mkdir] Created dir: /home/zhoda/dev/Projects/apache-hadoop/hadoop/hadoop-tools/hadoop-azure/target/test-dir/2
    [mkdir] Created dir: /home/zhoda/dev/Projects/apache-hadoop/hadoop/hadoop-tools/hadoop-azure/target/test-dir/3
    [mkdir] Created dir: /home/zhoda/dev/Projects/apache-hadoop/hadoop/hadoop-tools/hadoop-azure/target/test-dir/4
    [mkdir] Created dir: /home/zhoda/dev/Projects/apache-hadoop/hadoop/hadoop-tools/hadoop-azure/target/test-dir/5
    [mkdir] Created dir: /home/zhoda/dev/Projects/apache-hadoop/hadoop/hadoop-tools/hadoop-azure/target/test-dir/6
    [mkdir] Created dir: /home/zhoda/dev/Projects/apache-hadoop/hadoop/hadoop-tools/hadoop-azure/target/test-dir/7
    [mkdir] Created dir: /home/zhoda/dev/Projects/apache-hadoop/hadoop/hadoop-tools/hadoop-azure/target/test-dir/8
    [mkdir] Created dir: /home/zhoda/dev/Projects/apache-hadoop/hadoop/hadoop-tools/hadoop-azure/target/test/1
    [mkdir] Created dir: /home/zhoda/dev/Projects/apache-hadoop/hadoop/hadoop-tools/hadoop-azure/target/test/2
    [mkdir] Created dir: /home/zhoda/dev/Projects/apache-hadoop/hadoop/hadoop-tools/hadoop-azure/target/test/3
    [mkdir] Created dir: /home/zhoda/dev/Projects/apache-hadoop/hadoop/hadoop-tools/hadoop-azure/target/test/4
    [mkdir] Created dir: /home/zhoda/dev/Projects/apache-hadoop/hadoop/hadoop-tools/hadoop-azure/target/test/5
    [mkdir] Created dir: /home/zhoda/dev/Projects/apache-hadoop/hadoop/hadoop-tools/hadoop-azure/target/test/6
    [mkdir] Created dir: /home/zhoda/dev/Projects/apache-hadoop/hadoop/hadoop-tools/hadoop-azure/target/test/7
    [mkdir] Created dir: /home/zhoda/dev/Projects/apache-hadoop/hadoop/hadoop-tools/hadoop-azure/target/test/8
[INFO] Executed tasks
[INFO]
[INFO] --- maven-surefire-plugin:2.21.0:test (default-test) @ hadoop-azure ---
[INFO]
[INFO] -------------------------------------------------------
[INFO]  T E S T S
[INFO] -------------------------------------------------------
[INFO] Running org.apache.hadoop.fs.azure.TestShellDecryptionKeyProvider
[INFO] Running org.apache.hadoop.fs.azure.TestBlobMetadata
[INFO] Running org.apache.hadoop.fs.azure.TestWasbFsck
[INFO] Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked
[INFO] Running org.apache.hadoop.fs.azure.TestOutOfBandAzureBlobOperations
[INFO] Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemUploadLogic
[INFO] Running org.apache.hadoop.fs.azure.TestClientThrottlingAnalyzer
[INFO] Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked
[WARNING] Tests run: 3, Failures: 0, Errors: 0, Skipped: 3, Time elapsed: 1.098 s - in org.apache.hadoop.fs.azure.TestNativeAzureFileSystemUploadLogic
[WARNING] Tests run: 2, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 1.934 s - in org.apache.hadoop.fs.azure.TestShellDecryptionKeyProvider
[INFO] Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemConcurrency
[INFO] Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked
[WARNING] Tests run: 2, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 8.097 s - in org.apache.hadoop.fs.azure.TestWasbFsck
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.808 s - in org.apache.hadoop.fs.azure.TestBlobMetadata
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.033 s - in org.apache.hadoop.fs.azure.TestOutOfBandAzureBlobOperations
[INFO] Running org.apache.hadoop.fs.azure.metrics.TestNativeAzureFileSystemMetricsSystem
[INFO] Running org.apache.hadoop.fs.azure.metrics.TestBandwidthGaugeUpdater
[INFO] Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemAuthorization
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.308 s - in org.apache.hadoop.fs.azure.TestNativeAzureFileSystemConcurrency
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.594 s - in
[jira] [Assigned] (HADOOP-15530) RPC could stuck at senderFuture.get()
[ https://issues.apache.org/jira/browse/HADOOP-15530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yongjun Zhang reassigned HADOOP-15530:
--------------------------------------
    Assignee: Yongjun Zhang

> RPC could stuck at senderFuture.get()
> -------------------------------------
>
>                 Key: HADOOP-15530
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15530
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: common
>            Reporter: Yongjun Zhang
>            Assignee: Yongjun Zhang
>            Priority: Major
>
> In Client.java, sendRpcRequest does the following:
> {code}
>     /** Initiates a rpc call by sending the rpc request to the remote server.
>      * Note: this is not called from the Connection thread, but by other
>      * threads.
>      * @param call - the rpc request
>      */
>     public void sendRpcRequest(final Call call)
>         throws InterruptedException, IOException {
>       if (shouldCloseConnection.get()) {
>         return;
>       }
>
>       // Serialize the call to be sent. This is done from the actual
>       // caller thread, rather than the sendParamsExecutor thread,
>       // so that if the serialization throws an error, it is reported
>       // properly. This also parallelizes the serialization.
>       //
>       // Format of a call on the wire:
>       // 0) Length of rest below (1 + 2)
>       // 1) RpcRequestHeader - is serialized Delimited hence contains length
>       // 2) RpcRequest
>       //
>       // Items '1' and '2' are prepared here.
>       RpcRequestHeaderProto header = ProtoUtil.makeRpcRequestHeader(
>           call.rpcKind, OperationProto.RPC_FINAL_PACKET, call.id, call.retry,
>           clientId);
>
>       final ResponseBuffer buf = new ResponseBuffer();
>       header.writeDelimitedTo(buf);
>       RpcWritable.wrap(call.rpcRequest).writeTo(buf);
>
>       synchronized (sendRpcRequestLock) {
>         Future senderFuture = sendParamsExecutor.submit(new Runnable() {
>           @Override
>           public void run() {
>             try {
>               synchronized (ipcStreams.out) {
>                 if (shouldCloseConnection.get()) {
>                   return;
>                 }
>                 if (LOG.isDebugEnabled()) {
>                   LOG.debug(getName() + " sending #" + call.id
>                       + " " + call.rpcRequest);
>                 }
>                 // RpcRequestHeader + RpcRequest
>                 ipcStreams.sendRequest(buf.toByteArray());
>                 ipcStreams.flush();
>               }
>             } catch (IOException e) {
>               // exception at this point would leave the connection in an
>               // unrecoverable state (eg half a call left on the wire).
>               // So, close the connection, killing any outstanding calls
>               markClosed(e);
>             } finally {
>               // the buffer is just an in-memory buffer, but it is still
>               // polite to close early
>               IOUtils.closeStream(buf);
>             }
>           }
>         });
>
>         try {
>           senderFuture.get();
>         } catch (ExecutionException e) {
>           Throwable cause = e.getCause();
>           // cause should only be a RuntimeException as the Runnable above
>           // catches IOException
>           if (cause instanceof RuntimeException) {
>             throw (RuntimeException) cause;
>           } else {
>             throw new RuntimeException("unexpected checked exception", cause);
>           }
>         }
>       }
>     }
> {code}
> It's observed that the call can be stuck at {{senderFuture.get();}}. Given that we support rpcTimeOut, we could choose the second method of Future below:
> {code}
>     /**
>      * Waits if necessary for the computation to complete, and then
>      * retrieves its result.
>      *
>      * @return the computed result
>      * @throws CancellationException if the computation was cancelled
>      * @throws ExecutionException if the computation threw an exception
>      * @throws InterruptedException if the current thread was interrupted
>      *         while waiting
>      */
>     V get() throws InterruptedException, ExecutionException;
>
>     /**
>      * Waits if necessary for at most the given time for the computation
>      * to complete, and then retrieves its result, if available.
>      *
>      * @param timeout the maximum time to wait
>      * @param unit the time unit of the timeout argument
>      * @return the computed result
>      * @throws CancellationException if the computation was cancelled
>      * @throws ExecutionException if the computation threw an exception
>      * @throws InterruptedException if the current thread was interrupted
>      *         while waiting
>      * @throws TimeoutException if the wait timed out
>      */
>     V get(long timeout, TimeUnit unit)
>         throws
[jira] [Updated] (HADOOP-15530) RPC could stuck at senderFuture.get()
[ https://issues.apache.org/jira/browse/HADOOP-15530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yongjun Zhang updated HADOOP-15530:
-----------------------------------
    Description: 
In Client.java, sendRpcRequest does the following:
{code}
    /** Initiates a rpc call by sending the rpc request to the remote server.
     * Note: this is not called from the Connection thread, but by other
     * threads.
     * @param call - the rpc request
     */
    public void sendRpcRequest(final Call call)
        throws InterruptedException, IOException {
      if (shouldCloseConnection.get()) {
        return;
      }

      // Serialize the call to be sent. This is done from the actual
      // caller thread, rather than the sendParamsExecutor thread,
      // so that if the serialization throws an error, it is reported
      // properly. This also parallelizes the serialization.
      //
      // Format of a call on the wire:
      // 0) Length of rest below (1 + 2)
      // 1) RpcRequestHeader - is serialized Delimited hence contains length
      // 2) RpcRequest
      //
      // Items '1' and '2' are prepared here.
      RpcRequestHeaderProto header = ProtoUtil.makeRpcRequestHeader(
          call.rpcKind, OperationProto.RPC_FINAL_PACKET, call.id, call.retry,
          clientId);

      final ResponseBuffer buf = new ResponseBuffer();
      header.writeDelimitedTo(buf);
      RpcWritable.wrap(call.rpcRequest).writeTo(buf);

      synchronized (sendRpcRequestLock) {
        Future senderFuture = sendParamsExecutor.submit(new Runnable() {
          @Override
          public void run() {
            try {
              synchronized (ipcStreams.out) {
                if (shouldCloseConnection.get()) {
                  return;
                }
                if (LOG.isDebugEnabled()) {
                  LOG.debug(getName() + " sending #" + call.id
                      + " " + call.rpcRequest);
                }
                // RpcRequestHeader + RpcRequest
                ipcStreams.sendRequest(buf.toByteArray());
                ipcStreams.flush();
              }
            } catch (IOException e) {
              // exception at this point would leave the connection in an
              // unrecoverable state (eg half a call left on the wire).
              // So, close the connection, killing any outstanding calls
              markClosed(e);
            } finally {
              // the buffer is just an in-memory buffer, but it is still
              // polite to close early
              IOUtils.closeStream(buf);
            }
          }
        });

        try {
          senderFuture.get();
        } catch (ExecutionException e) {
          Throwable cause = e.getCause();
          // cause should only be a RuntimeException as the Runnable above
          // catches IOException
          if (cause instanceof RuntimeException) {
            throw (RuntimeException) cause;
          } else {
            throw new RuntimeException("unexpected checked exception", cause);
          }
        }
      }
    }
{code}
It's observed that the call can be stuck at {{senderFuture.get();}}. Given that we support rpcTimeOut, we could choose the second method of Future below:
{code}
    /**
     * Waits if necessary for the computation to complete, and then
     * retrieves its result.
     *
     * @return the computed result
     * @throws CancellationException if the computation was cancelled
     * @throws ExecutionException if the computation threw an exception
     * @throws InterruptedException if the current thread was interrupted
     *         while waiting
     */
    V get() throws InterruptedException, ExecutionException;

    /**
     * Waits if necessary for at most the given time for the computation
     * to complete, and then retrieves its result, if available.
     *
     * @param timeout the maximum time to wait
     * @param unit the time unit of the timeout argument
     * @return the computed result
     * @throws CancellationException if the computation was cancelled
     * @throws ExecutionException if the computation threw an exception
     * @throws InterruptedException if the current thread was interrupted
     *         while waiting
     * @throws TimeoutException if the wait timed out
     */
    V get(long timeout, TimeUnit unit)
        throws InterruptedException, ExecutionException, TimeoutException;
{code}
In theory, since the RPC at the client is serialized, we could just use the main thread to do the send instead of using a thread pool to create a new thread. This can be discussed in a separate jira.
And why the RPC is not processed and returned by the NN is another topic.

  was:
In Client.java, sendRpcRequest does the following:
{code}
    /** Initiates a rpc call by sending the rpc request to the remote server.
     * Note: this is not called from the Connection thread, but by other
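The switch the description proposes, from the unbounded {{get()}} to the timed {{get(timeout, unit)}}, can be sketched standalone as below. The executor, the Runnable, and the timeout values are stand-ins for Hadoop's sendParamsExecutor and rpcTimeOut, not the actual Client.java change:

```java
import java.util.concurrent.*;

public class TimedGetSketch {
    // Submits a task and waits at most timeoutMs for it to finish.
    // Returns true if it completed in time, false if the wait timed out
    // (in which case the task is cancelled, analogous to closing the
    // connection rather than blocking the caller forever).
    public static boolean sendWithTimeout(Runnable task, long timeoutMs)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Future<?> senderFuture = pool.submit(task);
            try {
                // timed wait, unlike the unbounded senderFuture.get()
                senderFuture.get(timeoutMs, TimeUnit.MILLISECONDS);
                return true;
            } catch (TimeoutException e) {
                senderFuture.cancel(true);
                return false;
            }
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        // A fast task completes within the timeout...
        System.out.println(sendWithTimeout(() -> { }, 200));
        // ...while a wedged sender no longer blocks the caller forever.
        System.out.println(sendWithTimeout(() -> {
            try { Thread.sleep(5_000); } catch (InterruptedException ignored) { }
        }, 200));
    }
}
```

With the timed overload, a stuck sender surfaces as a TimeoutException after rpcTimeOut instead of hanging the calling thread indefinitely.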
[jira] [Created] (HADOOP-15531) Use commons-text instead of commons-lang for some classes
Takanobu Asanuma created HADOOP-15531:
-----------------------------------------
             Summary: Use commons-text instead of commons-lang for some classes
                 Key: HADOOP-15531
                 URL: https://issues.apache.org/jira/browse/HADOOP-15531
             Project: Hadoop Common
          Issue Type: Improvement
            Reporter: Takanobu Asanuma
            Assignee: Takanobu Asanuma

After upgrading commons-lang from 2.6 to 3.7, some classes such as {{StringEscapeUtils}} and {{WordUtils}} became deprecated and moved to commons-text.
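The migration described above would presumably pull the commons-text artifact into the build so call sites can move from the deprecated commons-lang classes. A sketch of the Maven dependency follows; the exact version is an assumption (commons-text 1.4 was roughly current at the time), and where it is declared in the Hadoop poms would be up to the patch:

```xml
<!-- Hypothetical dependency addition; the version number is an assumption. -->
<dependency>
  <groupId>org.apache.commons</groupId>
  <artifactId>commons-text</artifactId>
  <version>1.4</version>
</dependency>
```

Call sites then change from {{org.apache.commons.lang3.StringEscapeUtils}} and {{org.apache.commons.lang3.text.WordUtils}} to {{org.apache.commons.text.StringEscapeUtils}} and {{org.apache.commons.text.WordUtils}}; the method names themselves are largely unchanged, so the edit is mostly an import swap.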
[jira] [Comment Edited] (HADOOP-15529) ContainerLaunch#testInvalidEnvVariableSubstitutionType is not supported in Windows
[ https://issues.apache.org/jira/browse/HADOOP-15529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16509043#comment-16509043 ]

Íñigo Goiri edited comment on HADOOP-15529 at 6/12/18 1:31 AM:
---------------------------------------------------------------
All the unit tests seem to pass [here|https://builds.apache.org/job/PreCommit-HADOOP-Build/14756/testReport/org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher/TestContainersLauncher/].
Can you fix the ternary operator?

was (Author: elgoiri):
All the unit tests seem to pass [here|https://builds.apache.org/job/PreCommit-HADOOP-Build/14756/testReport/org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher/TestContainersLauncher/].
+1 on [^HADOOP-15529.v1.patch].
Committing all the way to branch-2.9.

> ContainerLaunch#testInvalidEnvVariableSubstitutionType is not supported in Windows
> ----------------------------------------------------------------------------------
>
>                 Key: HADOOP-15529
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15529
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Giovanni Matteo Fumarola
>            Assignee: Giovanni Matteo Fumarola
>            Priority: Minor
>         Attachments: HADOOP-15529.v1.patch
>
> YARN-5219 introduced two unit tests designed for Unix. They are currently failing on Windows.
[jira] [Created] (HADOOP-15530) RPC could stuck at senderFuture.get()
Yongjun Zhang created HADOOP-15530:
--------------------------------------
             Summary: RPC could stuck at senderFuture.get()
                 Key: HADOOP-15530
                 URL: https://issues.apache.org/jira/browse/HADOOP-15530
             Project: Hadoop Common
          Issue Type: Bug
          Components: common
            Reporter: Yongjun Zhang

In Client.java, sendRpcRequest does the following:
{code}
    /** Initiates a rpc call by sending the rpc request to the remote server.
     * Note: this is not called from the Connection thread, but by other
     * threads.
     * @param call - the rpc request
     */
    public void sendRpcRequest(final Call call)
        throws InterruptedException, IOException {
      if (shouldCloseConnection.get()) {
        return;
      }

      // Serialize the call to be sent. This is done from the actual
      // caller thread, rather than the sendParamsExecutor thread,
      // so that if the serialization throws an error, it is reported
      // properly. This also parallelizes the serialization.
      //
      // Format of a call on the wire:
      // 0) Length of rest below (1 + 2)
      // 1) RpcRequestHeader - is serialized Delimited hence contains length
      // 2) RpcRequest
      //
      // Items '1' and '2' are prepared here.
      RpcRequestHeaderProto header = ProtoUtil.makeRpcRequestHeader(
          call.rpcKind, OperationProto.RPC_FINAL_PACKET, call.id, call.retry,
          clientId);

      final ResponseBuffer buf = new ResponseBuffer();
      header.writeDelimitedTo(buf);
      RpcWritable.wrap(call.rpcRequest).writeTo(buf);

      synchronized (sendRpcRequestLock) {
        Future senderFuture = sendParamsExecutor.submit(new Runnable() {
          @Override
          public void run() {
            try {
              synchronized (ipcStreams.out) {
                if (shouldCloseConnection.get()) {
                  return;
                }
                if (LOG.isDebugEnabled()) {
                  LOG.debug(getName() + " sending #" + call.id
                      + " " + call.rpcRequest);
                }
                // RpcRequestHeader + RpcRequest
                ipcStreams.sendRequest(buf.toByteArray());
                ipcStreams.flush();
              }
            } catch (IOException e) {
              // exception at this point would leave the connection in an
              // unrecoverable state (eg half a call left on the wire).
              // So, close the connection, killing any outstanding calls
              markClosed(e);
            } finally {
              // the buffer is just an in-memory buffer, but it is still
              // polite to close early
              IOUtils.closeStream(buf);
            }
          }
        });

        try {
          senderFuture.get();
        } catch (ExecutionException e) {
          Throwable cause = e.getCause();
          // cause should only be a RuntimeException as the Runnable above
          // catches IOException
          if (cause instanceof RuntimeException) {
            throw (RuntimeException) cause;
          } else {
            throw new RuntimeException("unexpected checked exception", cause);
          }
        }
      }
    }
{code}
It's observed that the call can be stuck at {{senderFuture.get();}}. Given that we support rpcTimeOut, we could choose the second method of Future below:
{code}
    /**
     * Waits if necessary for the computation to complete, and then
     * retrieves its result.
     *
     * @return the computed result
     * @throws CancellationException if the computation was cancelled
     * @throws ExecutionException if the computation threw an exception
     * @throws InterruptedException if the current thread was interrupted
     *         while waiting
     */
    V get() throws InterruptedException, ExecutionException;

    /**
     * Waits if necessary for at most the given time for the computation
     * to complete, and then retrieves its result, if available.
     *
     * @param timeout the maximum time to wait
     * @param unit the time unit of the timeout argument
     * @return the computed result
     * @throws CancellationException if the computation was cancelled
     * @throws ExecutionException if the computation threw an exception
     * @throws InterruptedException if the current thread was interrupted
     *         while waiting
     * @throws TimeoutException if the wait timed out
     */
    V get(long timeout, TimeUnit unit)
        throws InterruptedException, ExecutionException, TimeoutException;
{code}
In theory, since the RPC at the client is serialized, we could just use the main thread to do the send instead of using a thread pool to create a new thread. This can be discussed in a separate jira.
[jira] [Commented] (HADOOP-15527) Sometimes daemons keep running even after "kill -9" from daemon-stop script
[ https://issues.apache.org/jira/browse/HADOOP-15527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16509044#comment-16509044 ]

Eric Yang commented on HADOOP-15527:
------------------------------------
JDK 8 added new tooling to control OS processes, notably the destroyForcibly method. However, this tooling is somewhat OS-dependent, and it only makes a best effort to terminate child processes. That can leave dangling child processes around until they notice the parent process is shutting down. When kill -9 is executed, the ps -p output may still contain the list of child threads, and this is mistaken for the parent process still being alive. Java 9 has another set of improvements in this area; there is a blog post about its [process handling|https://javax0.wordpress.com/2017/07/19/process-handling-in-java-9/], which might improve the child-process handling. For the Hadoop shell-script improvement, we probably want to make sure child threads are not listed by ps -p, or check /proc/[pid] to determine the liveness of the process, and implement a loop around that check to ensure the process is gone before the script exits.

> Sometimes daemons keep running even after "kill -9" from daemon-stop script
> ---------------------------------------------------------------------------
>
>                 Key: HADOOP-15527
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15527
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Vinod Kumar Vavilapalli
>            Assignee: Vinod Kumar Vavilapalli
>            Priority: Major
>
> I'm seeing that sometimes daemons keep running for a little while even after "kill -9" from the daemon-stop scripts.
> Debugging further, I see several instances of "ERROR: Unable to kill ${pid}".
> Saw this specifically with the ResourceManager and NodeManager: {{yarn --daemon stop nodemanager}}. Though it is possible that other daemons may run into this too.
> Saw this on both CentOS and Ubuntu.
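The "loop until the process is gone before the script exits" idea, together with the JDK 9 process API mentioned in the comment, can be sketched as follows. This is an illustrative stand-in for the shell-script fix, not the actual daemon-stop code, and it assumes a POSIX `sleep` binary is available for the demo child process:

```java
public class LivenessCheckSketch {
    // Polls until the process is gone or maxWaitMs elapses.
    // Returns true if the process terminated within the window,
    // mirroring the suggested "check liveness in a loop before exit".
    public static boolean waitForExit(ProcessHandle handle, long maxWaitMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + maxWaitMs;
        while (handle.isAlive()) {
            if (System.currentTimeMillis() > deadline) {
                return false; // still alive: the caller should report an error
            }
            Thread.sleep(50);
        }
        return true;
    }

    public static void main(String[] args) throws Exception {
        // Spawn a short-lived child (hypothetical demo process).
        Process child = new ProcessBuilder("sleep", "30").start();
        child.destroyForcibly();                       // the "kill -9" analogue
        System.out.println(waitForExit(child.toHandle(), 5_000));
    }
}
```

The point of the loop is that kill -9 is asynchronous: the signal is delivered immediately, but the process may linger briefly in the process table, so a single post-kill liveness check can report a false positive.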
[jira] [Commented] (HADOOP-15529) ContainerLaunch#testInvalidEnvVariableSubstitutionType is not supported in Windows
[ https://issues.apache.org/jira/browse/HADOOP-15529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16509043#comment-16509043 ]

Íñigo Goiri commented on HADOOP-15529:
--------------------------------------
All the unit tests seem to pass [here|https://builds.apache.org/job/PreCommit-HADOOP-Build/14756/testReport/org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher/TestContainersLauncher/].
+1 on [^HADOOP-15529.v1.patch].
Committing all the way to branch-2.9.

> ContainerLaunch#testInvalidEnvVariableSubstitutionType is not supported in Windows
> ----------------------------------------------------------------------------------
>
>                 Key: HADOOP-15529
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15529
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Giovanni Matteo Fumarola
>            Assignee: Giovanni Matteo Fumarola
>            Priority: Minor
>         Attachments: HADOOP-15529.v1.patch
>
> YARN-5219 introduced two unit tests designed for Unix. They are currently failing on Windows.
[jira] [Updated] (HADOOP-15529) ContainerLaunch#testInvalidEnvVariableSubstitutionType is not supported in Windows
[ https://issues.apache.org/jira/browse/HADOOP-15529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Íñigo Goiri updated HADOOP-15529:
---------------------------------
    Issue Type: Bug  (was: Sub-task)
        Parent:     (was: HADOOP-15461)

> ContainerLaunch#testInvalidEnvVariableSubstitutionType is not supported in Windows
> ----------------------------------------------------------------------------------
>
>                 Key: HADOOP-15529
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15529
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Giovanni Matteo Fumarola
>            Assignee: Giovanni Matteo Fumarola
>            Priority: Minor
>         Attachments: HADOOP-15529.v1.patch
>
> YARN-5219 introduced two unit tests designed for Unix. They are currently failing on Windows.
[jira] [Updated] (HADOOP-15529) ContainerLaunch#testInvalidEnvVariableSubstitutionType is not supported in Windows
[ https://issues.apache.org/jira/browse/HADOOP-15529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Íñigo Goiri updated HADOOP-15529:
---------------------------------
    Issue Type: Sub-task  (was: Bug)
        Parent: HADOOP-15475

> ContainerLaunch#testInvalidEnvVariableSubstitutionType is not supported in Windows
> ----------------------------------------------------------------------------------
>
>                 Key: HADOOP-15529
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15529
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Giovanni Matteo Fumarola
>            Assignee: Giovanni Matteo Fumarola
>            Priority: Minor
>         Attachments: HADOOP-15529.v1.patch
>
> YARN-5219 introduced two unit tests designed for Unix. They are currently failing on Windows.
[jira] [Commented] (HADOOP-14178) Move Mockito up to version 2.x
[ https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16509021#comment-16509021 ] genericqa commented on HADOOP-14178: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 258 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 59s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 19m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 31m 20s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-hdfs-project/hadoop-hdfs-native-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests hadoop-mapreduce-project/hadoop-mapreduce-client hadoop-mapreduce-project hadoop-client-modules/hadoop-client-minicluster . hadoop-ozone/integration-test {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 19s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager in trunk has 1 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 50s{color} | {color:red} server-scm in trunk failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 23s{color} | {color:red} hadoop-ozone in trunk failed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 46s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 20s{color} | {color:red} server-scm in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 19s{color} | {color:red} integration-test in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 14s{color} | {color:red} hadoop-ozone in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 12s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 27m 12s{color} | {color:red} root generated 15 new + 1487 unchanged - 0 fixed = 1502 total (was 1487) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 20m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 54s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 3s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-hdfs-project/hadoop-hdfs-native-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests hadoop-mapreduce-project/hadoop-mapreduce-client hadoop-mapreduce-project hadoop-client-modules/hadoop-client-minicluster . hadoop-ozone/integration-test
[jira] [Commented] (HADOOP-15529) ContainerLaunch#testInvalidEnvVariableSubstitutionType is not supported in Windows
[ https://issues.apache.org/jira/browse/HADOOP-15529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16509020#comment-16509020 ] genericqa commented on HADOOP-15529: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 34s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 39s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 44s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 73m 47s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd | | JIRA Issue | HADOOP-15529 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12927403/HADOOP-15529.v1.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 94338e365249 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 23bfd9f | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_171 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/14756/testReport/ | | Max. process+thread count | 408 (vs. ulimit of 1) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/14756/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. >
[jira] [Commented] (HADOOP-15504) Upgrade Maven and Maven Wagon versions
[ https://issues.apache.org/jira/browse/HADOOP-15504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508996#comment-16508996 ] genericqa commented on HADOOP-15504: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 26s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 19m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 30m 4s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 26s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 35s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 18m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 32s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 21s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}159m 33s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 46s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}339m 50s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.client.impl.TestBlockReaderLocal | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageSchema | | | hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity | | | hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps | | |
[jira] [Commented] (HADOOP-15529) ContainerLaunch#testInvalidEnvVariableSubstitutionType is not supported in Windows
[ https://issues.apache.org/jira/browse/HADOOP-15529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508965#comment-16508965 ] Íñigo Goiri commented on HADOOP-15529: -- I agree with the fix in [^HADOOP-15529.v1.patch]. Can we make the ternary operator a full if statement? It is easier to read that way. The unit tests are failing on Windows in the daily build too: [here|https://builds.apache.org/job/hadoop-trunk-win/494/testReport/org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher/TestContainerLaunch/]. I would also make this part of HADOOP-15475. > ContainerLaunch#testInvalidEnvVariableSubstitutionType is not supported in > Windows > -- > > Key: HADOOP-15529 > URL: https://issues.apache.org/jira/browse/HADOOP-15529 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola >Priority: Minor > Attachments: HADOOP-15529.v1.patch > > > YARN-5219 introduced 2 unit tests designed for Unix. They are currently failing > on Windows. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
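The ternary-to-if rewrite Íñigo suggests can be sketched as follows. This is purely illustrative and not the actual HADOOP-15529 patch; the method names and expected strings are made up for the example.

```java
// Illustrative only, not the HADOOP-15529 patch: the kind of rewrite
// suggested in review, turning a compact ternary into an explicit if/else.
class TernaryToIf {
    // Before: a ternary choosing an expected value per platform.
    static String expectedCompact(boolean isWindows) {
        return isWindows ? "not supported on Windows" : "bad substitution";
    }

    // After: the same choice as a full if/else, easier to scan and extend.
    static String expectedExplicit(boolean isWindows) {
        if (isWindows) {
            return "not supported on Windows";
        } else {
            return "bad substitution";
        }
    }

    public static void main(String[] args) {
        // Both forms must agree on every input.
        for (boolean w : new boolean[] {true, false}) {
            if (!expectedCompact(w).equals(expectedExplicit(w))) {
                throw new AssertionError("forms disagree for isWindows=" + w);
            }
        }
    }
}
```

The behavior is identical; the if/else form trades a line or two of brevity for a shape that is easier to read in review and to extend with further platform cases later.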
[jira] [Commented] (HADOOP-15493) DiskChecker should handle disk full situation
[ https://issues.apache.org/jira/browse/HADOOP-15493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508957#comment-16508957 ] Wangda Tan commented on HADOOP-15493: - Bulk update on non-blocker issues targeted to 3.1.1: if this issue is absolutely required for 3.1.1, please upgrade its priority to blocker. I'm working on the 3.1.1 release now and will move these JIRAs to 3.1.2 during the week. Thanks. > DiskChecker should handle disk full situation > - > > Key: HADOOP-15493 > URL: https://issues.apache.org/jira/browse/HADOOP-15493 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal >Priority: Critical > Attachments: HADOOP-15493.01.patch, HADOOP-15493.02.patch > > > DiskChecker#checkDirWithDiskIo creates a file to verify that the disk is > writable. > However, the check should not fail when file creation fails due to the disk being > full. This avoids marking full disks as _failed_. > Reported by [~kihwal] and [~daryn] in HADOOP-15450. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
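The behavior the HADOOP-15493 description asks for can be sketched like this. This is a hedged sketch, not the attached patch: classifying the failure by matching the exception message against "No space left on device" is an assumption made for illustration, and the method names are stand-ins for the real DiskChecker internals.

```java
import java.io.IOException;

// Hedged sketch of the requested behavior: if the probe write fails only
// because the disk is full (ENOSPC), the disk should not be marked failed.
// Message matching is an illustrative assumption, not the real patch's logic.
class DiskFullAwareCheck {
    static boolean isDiskFull(IOException e) {
        String msg = e.getMessage();
        return msg != null && msg.contains("No space left on device");
    }

    // Returns true if the directory should be considered healthy,
    // given the IOException thrown by the probe-file creation (null if none).
    static boolean checkDirWithDiskIo(IOException probeFailure) {
        if (probeFailure == null) {
            return true;                 // probe file was created fine
        }
        return isDiskFull(probeFailure); // a full disk is not a failed disk
    }
}
```

Any other I/O failure (permissions, hardware errors) still marks the disk as failed; only the disk-full case is excused.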
[jira] [Commented] (HADOOP-15529) ContainerLaunch#testInvalidEnvVariableSubstitutionType is not supported in Windows
[ https://issues.apache.org/jira/browse/HADOOP-15529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508940#comment-16508940 ] Giovanni Matteo Fumarola commented on HADOOP-15529: --- This Jira blocks HADOOP-15528, since I cannot finalize the tests on Windows. > ContainerLaunch#testInvalidEnvVariableSubstitutionType is not supported in > Windows > -- > > Key: HADOOP-15529 > URL: https://issues.apache.org/jira/browse/HADOOP-15529 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola >Priority: Minor > Attachments: HADOOP-15529.v1.patch > > > YARN-5219 introduced 2 unit tests designed for Unix. They are currently failing > on Windows. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15529) ContainerLaunch#testInvalidEnvVariableSubstitutionType is not supported in Windows
[ https://issues.apache.org/jira/browse/HADOOP-15529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508938#comment-16508938 ] Giovanni Matteo Fumarola commented on HADOOP-15529: --- YARN-5219 introduced 2 unit tests designed for Unix, but the patch does not restrict them to running only on Unix systems. The possible fixes are: 1) Disable the tests on Windows; or 2) Fix them to be compatible with Windows. I went with the second fix. Patch attached. cc [~leftnoteasy], [~sunilg], [~elgoiri] > ContainerLaunch#testInvalidEnvVariableSubstitutionType is not supported in > Windows > -- > > Key: HADOOP-15529 > URL: https://issues.apache.org/jira/browse/HADOOP-15529 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola >Priority: Minor > Attachments: HADOOP-15529.v1.patch > > > YARN-5219 introduced 2 unit tests designed for Unix. They are currently failing > on Windows. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
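Option 1 above, fencing Unix-only tests so they are skipped rather than failed on Windows, is commonly done in Hadoop tests with Assume.assumeTrue(!Shell.WINDOWS). The sketch below mimics that pattern with plain Java stand-ins for Shell and JUnit's Assume so it is self-contained; the real classes live in org.apache.hadoop.util and org.junit.

```java
// Hedged sketch of option 1 (skipping Unix-only tests on Windows).
// WINDOWS mirrors Hadoop's Shell.WINDOWS os.name check; assumeTrue is a
// stand-in for org.junit.Assume.assumeTrue, which skips (not fails) a test.
class PlatformFence {
    static final boolean WINDOWS =
        System.getProperty("os.name").toLowerCase().startsWith("windows");

    // Stand-in for Assume.assumeTrue: a false assumption means "skip".
    static boolean assumeTrue(boolean condition) {
        return condition;
    }

    static String runUnixOnlyTest() {
        if (!assumeTrue(!WINDOWS)) {
            return "skipped"; // the test is reported as skipped, not failed
        }
        // ... Unix-specific assertions (e.g. bash env substitution) ...
        return "ran";
    }
}
```

Option 2, the one the patch takes, instead branches the expected values per platform so the test runs and passes everywhere.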
[jira] [Updated] (HADOOP-15529) ContainerLaunch#testInvalidEnvVariableSubstitutionType is not supported in Windows
[ https://issues.apache.org/jira/browse/HADOOP-15529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Giovanni Matteo Fumarola updated HADOOP-15529: -- Status: Patch Available (was: Open) > ContainerLaunch#testInvalidEnvVariableSubstitutionType is not supported in > Windows > -- > > Key: HADOOP-15529 > URL: https://issues.apache.org/jira/browse/HADOOP-15529 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola >Priority: Minor > Attachments: HADOOP-15529.v1.patch > > > YARN-5219 introduced 2 unit tests designed for Unix. They are currently failing > on Windows. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15529) ContainerLaunch#testInvalidEnvVariableSubstitutionType is not supported in Windows
[ https://issues.apache.org/jira/browse/HADOOP-15529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Giovanni Matteo Fumarola updated HADOOP-15529: -- Description: YARN-5219 introduced 2 unit tests designed for Unix. They are currently failing on Windows. > ContainerLaunch#testInvalidEnvVariableSubstitutionType is not supported in > Windows > -- > > Key: HADOOP-15529 > URL: https://issues.apache.org/jira/browse/HADOOP-15529 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola >Priority: Minor > Attachments: HADOOP-15529.v1.patch > > > YARN-5219 introduced 2 unit tests designed for Unix. They are currently failing > on Windows. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15529) ContainerLaunch#testInvalidEnvVariableSubstitutionType is not supported in Windows
[ https://issues.apache.org/jira/browse/HADOOP-15529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Giovanni Matteo Fumarola updated HADOOP-15529: -- Attachment: HADOOP-15529.v1.patch > ContainerLaunch#testInvalidEnvVariableSubstitutionType is not supported in > Windows > -- > > Key: HADOOP-15529 > URL: https://issues.apache.org/jira/browse/HADOOP-15529 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola >Priority: Minor > Attachments: HADOOP-15529.v1.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-15529) ContainerLaunch#testInvalidEnvVariableSubstitutionType is not supported in Windows
Giovanni Matteo Fumarola created HADOOP-15529: - Summary: ContainerLaunch#testInvalidEnvVariableSubstitutionType is not supported in Windows Key: HADOOP-15529 URL: https://issues.apache.org/jira/browse/HADOOP-15529 Project: Hadoop Common Issue Type: Sub-task Reporter: Giovanni Matteo Fumarola Assignee: Giovanni Matteo Fumarola -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15528) Deprecate ContainerLaunch#link by using FileUtil#SymLink
[ https://issues.apache.org/jira/browse/HADOOP-15528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508912#comment-16508912 ] genericqa commented on HADOOP-15528: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} HADOOP-15461 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 1s{color} | {color:green} HADOOP-15461 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} HADOOP-15461 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} HADOOP-15461 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s{color} | {color:green} HADOOP-15461 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 31s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 52s{color} | {color:green} HADOOP-15461 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} HADOOP-15461 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 16s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 22m 17s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 79m 46s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.nodemanager.TestNodeManagerShutdown | | | hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch | | | hadoop.yarn.server.nodemanager.containermanager.TestContainerManager | | | hadoop.yarn.server.nodemanager.containermanager.monitor.TestContainersMonitor | | | hadoop.yarn.server.nodemanager.TestNodeManagerResync | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd | | JIRA Issue | HADOOP-15528 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12927384/HADOOP-15528-HADOOP-15461.v1.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 7746332dd627 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | HADOOP-15461 / ae9d83a | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_171 | | findbugs | v3.1.0-RC1 | | unit |
[jira] [Comment Edited] (HADOOP-15506) Upgrade Azure Storage Sdk version to 7.0.0 and update corresponding code blocks
[ https://issues.apache.org/jira/browse/HADOOP-15506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508904#comment-16508904 ] Chris Douglas edited comment on HADOOP-15506 at 6/11/18 10:49 PM: -- Backported through 2.10.0. [~esmanii], would you mind verifying the backported bits in branch-2? was (Author: chris.douglas): Backported through 2.10.0 > Upgrade Azure Storage Sdk version to 7.0.0 and update corresponding code > blocks > --- > > Key: HADOOP-15506 > URL: https://issues.apache.org/jira/browse/HADOOP-15506 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.2.0 >Reporter: Esfandiar Manii >Assignee: Esfandiar Manii >Priority: Minor > Fix For: 2.10.0, 3.2.0, 3.1.1, 3.0.4 > > Attachments: HADOOP-15506-001.patch > > > - Upgraded Azure Storage Sdk to 7.0.0 > - Fixed code issues and couple of tests -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15506) Upgrade Azure Storage Sdk version to 7.0.0 and update corresponding code blocks
[ https://issues.apache.org/jira/browse/HADOOP-15506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508904#comment-16508904 ] Chris Douglas commented on HADOOP-15506: Backported through 2.10.0 > Upgrade Azure Storage Sdk version to 7.0.0 and update corresponding code > blocks > --- > > Key: HADOOP-15506 > URL: https://issues.apache.org/jira/browse/HADOOP-15506 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.2.0 >Reporter: Esfandiar Manii >Assignee: Esfandiar Manii >Priority: Minor > Fix For: 2.10.0, 3.2.0, 3.1.1, 3.0.4 > > Attachments: HADOOP-15506-001.patch > > > - Upgraded Azure Storage Sdk to 7.0.0 > - Fixed code issues and couple of tests -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15506) Upgrade Azure Storage Sdk version to 7.0.0 and update corresponding code blocks
[ https://issues.apache.org/jira/browse/HADOOP-15506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Douglas updated HADOOP-15506: --- Fix Version/s: 3.0.4 3.1.1 2.10.0 > Upgrade Azure Storage Sdk version to 7.0.0 and update corresponding code > blocks > --- > > Key: HADOOP-15506 > URL: https://issues.apache.org/jira/browse/HADOOP-15506 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.2.0 >Reporter: Esfandiar Manii >Assignee: Esfandiar Manii >Priority: Minor > Fix For: 2.10.0, 3.2.0, 3.1.1, 3.0.4 > > Attachments: HADOOP-15506-001.patch > > > - Upgraded Azure Storage Sdk to 7.0.0 > - Fixed code issues and couple of tests -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15521) Upgrading Azure Storage Sdk version to 7.0.0 and updating corresponding code blocks
[ https://issues.apache.org/jira/browse/HADOOP-15521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Douglas updated HADOOP-15521: --- Resolution: Duplicate Status: Resolved (was: Patch Available) Closing as a dup of HADOOP-15506 > Upgrading Azure Storage Sdk version to 7.0.0 and updating corresponding code > blocks > --- > > Key: HADOOP-15521 > URL: https://issues.apache.org/jira/browse/HADOOP-15521 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure >Affects Versions: 2.10.0 >Reporter: Esfandiar Manii >Assignee: Esfandiar Manii >Priority: Minor > Attachments: HADOOP-15521-001.patch, HADOOP-15521-branch-2-001.patch > > > Upgraded Azure Storage Sdk to 7.0.0 > Fixed code issues and couple of tests -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15521) Upgrading Azure Storage Sdk version to 7.0.0 and updating corresponding code blocks
[ https://issues.apache.org/jira/browse/HADOOP-15521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508833#comment-16508833 ] Chris Douglas commented on HADOOP-15521: No worries, just wanted to be sure [^HADOOP-15521-branch-2-001.patch] was the correct patch. I'll close this as a duplicate and backport HADOOP-15506. > Upgrading Azure Storage Sdk version to 7.0.0 and updating corresponding code > blocks > --- > > Key: HADOOP-15521 > URL: https://issues.apache.org/jira/browse/HADOOP-15521 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure >Affects Versions: 2.10.0 >Reporter: Esfandiar Manii >Assignee: Esfandiar Manii >Priority: Minor > Attachments: HADOOP-15521-001.patch, HADOOP-15521-branch-2-001.patch > > > Upgraded Azure Storage Sdk to 7.0.0 > Fixed code issues and couple of tests -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15521) Upgrading Azure Storage Sdk version to 7.0.0 and updating corresponding code blocks
[ https://issues.apache.org/jira/browse/HADOOP-15521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508813#comment-16508813 ] Esfandiar Manii commented on HADOOP-15521: -- After syncing offline with Thomas, I realized I don't need to create a separate JIRA for the backport. The original change is linked to this JIRA (HADOOP-15506, Upgrade Azure Storage Sdk version to 7.0.0 and update corresponding code blocks). Both of the patches here are identical and target branch-2, but I had to add the branch name to have the tests run against branch-2. Sorry for the confusion. > Upgrading Azure Storage Sdk version to 7.0.0 and updating corresponding code > blocks > --- > > Key: HADOOP-15521 > URL: https://issues.apache.org/jira/browse/HADOOP-15521 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure >Affects Versions: 2.10.0 >Reporter: Esfandiar Manii >Assignee: Esfandiar Manii >Priority: Minor > Attachments: HADOOP-15521-001.patch, HADOOP-15521-branch-2-001.patch > > > Upgraded Azure Storage Sdk to 7.0.0 > Fixed code issues and couple of tests -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15521) Upgrading Azure Storage Sdk version to 7.0.0 and updating corresponding code blocks
[ https://issues.apache.org/jira/browse/HADOOP-15521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508795#comment-16508795 ] Chris Douglas commented on HADOOP-15521: Both versions of the patch look identical, to each other and to HADOOP-15506 (+/- minor whitespace). Am I missing something, or is the patch missing some changes? > Upgrading Azure Storage Sdk version to 7.0.0 and updating corresponding code > blocks > --- > > Key: HADOOP-15521 > URL: https://issues.apache.org/jira/browse/HADOOP-15521 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure >Affects Versions: 2.10.0 >Reporter: Esfandiar Manii >Assignee: Esfandiar Manii >Priority: Minor > Attachments: HADOOP-15521-001.patch, HADOOP-15521-branch-2-001.patch > > > Upgraded Azure Storage Sdk to 7.0.0 > Fixed code issues and couple of tests -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15528) Deprecate ContainerLaunch#link by using FileUtil#SymLink
[ https://issues.apache.org/jira/browse/HADOOP-15528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HADOOP-15528: - Status: Patch Available (was: Open) > Deprecate ContainerLaunch#link by using FileUtil#SymLink > > > Key: HADOOP-15528 > URL: https://issues.apache.org/jira/browse/HADOOP-15528 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola >Priority: Major > Attachments: HADOOP-15528-HADOOP-15461.v1.patch > > > {{ContainerLaunch}} currently uses its own utility to create links (including > winutils). > This should be deprecated and rely on {{FileUtil#SymLink}} which is already > multi-platform and pure Java.
[jira] [Commented] (HADOOP-15528) Deprecate ContainerLaunch#link by using FileUtil#SymLink
[ https://issues.apache.org/jira/browse/HADOOP-15528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508786#comment-16508786 ] Íñigo Goiri commented on HADOOP-15528: -- Thanks [~giovanni.fumarola] for the patch. I'm not sure whether {{ContainerLaunch}} can be considered public or not, so I would mark the link method as deprecated instead of removing it. For the @SuppressWarnings("unchecked"), I think we should do that in a separate JIRA, probably in trunk. We need to find the source of this suppressed warning and why it is no longer needed.
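The pattern suggested in the comment above (keep the old entry point, mark it deprecated, and delegate to the shared pure-Java utility) can be sketched as follows. This is an illustrative sketch, not the actual ContainerLaunch or FileUtil code; the method names, signatures, and the 0/1 return convention are assumptions.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class LinkDemo {
    /**
     * Pure-Java, multi-platform link creation in the spirit of
     * FileUtil#symLink; returns 0 on success and 1 on failure
     * (an assumed exit-code-style contract, mimicking the old
     * shell/winutils path).
     */
    static int symLink(String target, String linkName) {
        try {
            Files.createSymbolicLink(Paths.get(linkName), Paths.get(target));
            return 0;
        } catch (IOException e) {
            return 1;
        }
    }

    /** Old entry point: kept for compatibility but deprecated, not removed. */
    @Deprecated
    static int link(String target, String linkName) {
        return symLink(target, linkName);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("link-demo");
        Path target = Files.createFile(dir.resolve("target"));
        int rc = link(target.toString(), dir.resolve("link").toString());
        System.out.println(rc); // 0 on platforms that permit symlinks
    }
}
```

Deprecating rather than deleting keeps any external callers compiling while steering new code to the shared utility, which is the usual Hadoop compatibility approach.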
[jira] [Updated] (HADOOP-15528) Deprecate ContainerLaunch#link by using FileUtil#SymLink
[ https://issues.apache.org/jira/browse/HADOOP-15528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HADOOP-15528: - Description: {{ContainerLaunch}} currently uses its own utility to create links (including winutils). This should be deprecated and rely on {{FileUtil#SymLink}} which is already multi-platform and pure Java.
[jira] [Updated] (HADOOP-15528) Deprecate ContainerLaunch#link by using FileUtil#SymLink
[ https://issues.apache.org/jira/browse/HADOOP-15528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Giovanni Matteo Fumarola updated HADOOP-15528: -- Attachment: HADOOP-15528-HADOOP-15461.v1.patch
[jira] [Created] (HADOOP-15528) Deprecate ContainerLaunch#link by using FileUtil#SymLink
Giovanni Matteo Fumarola created HADOOP-15528: - Summary: Deprecate ContainerLaunch#link by using FileUtil#SymLink Key: HADOOP-15528 URL: https://issues.apache.org/jira/browse/HADOOP-15528 Project: Hadoop Common Issue Type: Sub-task Reporter: Giovanni Matteo Fumarola Assignee: Giovanni Matteo Fumarola
[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances
[ https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508708#comment-16508708 ] genericqa commented on HADOOP-14445: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 36s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 32m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 10s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 36s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 27m 36s{color} | {color:red} root generated 1 new + 1487 unchanged - 0 fixed = 1488 total (was 1487) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 0s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 24s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 8s{color} | {color:green} hadoop-kms in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 36s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}140m 46s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.conf.TestCommonConfigurationFields | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd | | JIRA Issue | HADOOP-14445 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12927022/HADOOP-14445.14.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux f5f6b3a23c85 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision
[jira] [Commented] (HADOOP-15527) Sometimes daemons keep running even after "kill -9" from daemon-stop script
[ https://issues.apache.org/jira/browse/HADOOP-15527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508695#comment-16508695 ] Vinod Kumar Vavilapalli commented on HADOOP-15527: -- Here is the info from some debug logs I added to hadoop/libexec/hadoop-functions.sh and after adding a while loop around the "ps" check. {code} === 2018-06-10 00:43:31,754 vinodkv inside scripts sending SIGTERM === 2018-06-10 00:43:31,756 vinodkv inside scripts SIGTERM sent, sleeping === 2018-06-10 00:43:36,759 vinodkv inside scripts 3989960 still alive! sending sig-kill === 2018-06-10 00:43:36,797 vinodkv inside scripts sigkill sent === 2018-06-10 00:43:36,827 vinodkv inside scripts.. unable to kill 3989960 === 2018-06-10 00:43:36,846 vinodkv inside scripts.. unable to kill 3989960 === 2018-06-10 00:43:36,866 vinodkv inside scripts.. unable to kill 3989960 === 2018-06-10 00:43:36,885 vinodkv inside scripts.. unable to kill 3989960 === 2018-06-10 00:43:36,904 vinodkv inside scripts.. unable to kill 3989960 === 2018-06-10 00:43:36,924 vinodkv inside scripts.. process 3989960 finally dead {code} {code} === 2018-06-10 00:48:00,884 vinodkv inside scripts sending SIGTERM === 2018-06-10 00:48:00,886 vinodkv inside scripts SIGTERM sent, sleeping === 2018-06-10 00:48:05,890 vinodkv inside scripts 3992747 still alive! sending sig-kill === 2018-06-10 00:48:05,898 vinodkv inside scripts sigkill sent === 2018-06-10 00:48:05,921 vinodkv inside scripts.. unable to kill 3992747 === 2018-06-10 00:48:05,938 vinodkv inside scripts.. unable to kill 3992747 === 2018-06-10 00:48:05,953 vinodkv inside scripts.. unable to kill 3992747 === 2018-06-10 00:48:05,970 vinodkv inside scripts.. unable to kill 3992747 === 2018-06-10 00:48:05,987 vinodkv inside scripts.. unable to kill 3992747 === 2018-06-10 00:48:06,006 vinodkv inside scripts.. unable to kill 3992747 === 2018-06-10 00:48:06,024 vinodkv inside scripts.. unable to kill 3992747 === 2018-06-10 00:48:06,042 vinodkv inside scripts.. 
process 3992747 finally dead {code} It takes roughly 125-145 milliseconds for the RM to come down once a "kill -9" is sent. It may be due to system load; I don't have any other explanation for why this is only happening now. > Sometimes daemons keep running even after "kill -9" from daemon-stop script > --- > > Key: HADOOP-15527 > URL: https://issues.apache.org/jira/browse/HADOOP-15527 > Project: Hadoop Common > Issue Type: Bug >Reporter: Vinod Kumar Vavilapalli >Assignee: Vinod Kumar Vavilapalli >Priority: Major > > I'm seeing that sometimes daemons keep running for a little while even after > "kill -9" from daemon-stop scripts. > Debugging more, I see several instances of "ERROR: Unable to kill ${pid}". > Saw this specifically with ResourceManager & NodeManager - {{yarn --daemon > stop nodemanager}}. Though it is possible that other daemons may run into > this too. > Saw this on both Centos as well as Ubuntu.
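The behavior in the debug logs above, repeated "unable to kill" lines followed by the process finally dying, reflects that SIGKILL delivery is asynchronous: a single liveness check right after sending it can race. A minimal Java analogue of the retry loop added around the "ps" check (assuming a Unix system with a `sleep` binary; this stands in for the shell loop in hadoop-functions.sh, it is not the actual script):

```java
public class KillWaitDemo {
    public static void main(String[] args) throws Exception {
        // Stand-in for a daemon: a process we then forcibly kill.
        Process p = new ProcessBuilder("sleep", "60").start();
        p.destroyForcibly(); // sends SIGKILL; delivery is asynchronous

        // A single check here may still see the process alive, which is
        // what produced the repeated "unable to kill" messages. Polling
        // briefly instead of failing on the first check avoids the race.
        int polls = 0;
        while (p.isAlive()) {
            polls++;
            Thread.sleep(20);
        }
        System.out.println("process dead after " + polls + " poll(s)");
    }
}
```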
[jira] [Created] (HADOOP-15527) Sometimes daemons keep running even after "kill -9" from daemon-stop script
Vinod Kumar Vavilapalli created HADOOP-15527: Summary: Sometimes daemons keep running even after "kill -9" from daemon-stop script Key: HADOOP-15527 URL: https://issues.apache.org/jira/browse/HADOOP-15527 Project: Hadoop Common Issue Type: Bug Reporter: Vinod Kumar Vavilapalli Assignee: Vinod Kumar Vavilapalli I'm seeing that sometimes daemons keep running for a little while even after "kill -9" from daemon-stop scripts. Debugging more, I see several instances of "ERROR: Unable to kill ${pid}". Saw this specifically with ResourceManager & NodeManager - {{yarn --daemon stop nodemanager}}. Though it is possible that other daemons may run into this too. Saw this on both Centos as well as Ubuntu.
[jira] [Commented] (HADOOP-15483) Upgrade jquery to version 3.3.1
[ https://issues.apache.org/jira/browse/HADOOP-15483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508686#comment-16508686 ] Jitendra Nath Pandey commented on HADOOP-15483: --- [~sunilg], [~msingh], one question. This patch adds bootstrap-3.3.7, but removes only a few files from bootstrap-3.0.2. Why don't we remove bootstrap-3.0.2 altogether? I don't see it being used anywhere. > Upgrade jquery to version 3.3.1 > --- > > Key: HADOOP-15483 > URL: https://issues.apache.org/jira/browse/HADOOP-15483 > Project: Hadoop Common > Issue Type: Task >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Attachments: HADOOP-15483.001.patch, HADOOP-15483.002.patch, > HADOOP-15483.003.patch, HADOOP-15483.004.patch, HADOOP-15483.005.patch, > HADOOP-15483.006.patch, HADOOP-15483.007.patch > > > This Jira aims to upgrade jquery to version 3.3.1.
[jira] [Updated] (HADOOP-15522) Deprecate Shell#ReadLink by using native java code
[ https://issues.apache.org/jira/browse/HADOOP-15522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HADOOP-15522: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: HADOOP-15461 Status: Resolved (was: Patch Available) Thanks [~giovanni.fumarola] for the patch. Committed to HADOOP-15461. > Deprecate Shell#ReadLink by using native java code > -- > > Key: HADOOP-15522 > URL: https://issues.apache.org/jira/browse/HADOOP-15522 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola >Priority: Major > Fix For: HADOOP-15461 > > Attachments: HADOOP-15522-HADOOP-15461.v1.patch, > HADOOP-15522-HADOOP-15461.v2.patch > > > Hadoop uses the shell to read symbolic links. Now that Hadoop relies on Java > 7+, we can deprecate all the shell code and rely on the Java APIs.
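The quoted description says it directly: on Java 7+ the shell call can be replaced by java.nio.file. A minimal sketch of what a native-Java readlink looks like; the helper name and its empty-string-on-failure convention are illustrative assumptions, not the actual Shell#ReadLink contract.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ReadLinkDemo {
    /**
     * Reads a symbolic link's target without forking a shell. Returns ""
     * when the path is not a link or cannot be read (assumed convention
     * for this sketch, not necessarily Hadoop's exact behavior).
     */
    static String readLink(Path p) {
        try {
            return Files.readSymbolicLink(p).toString();
        } catch (IOException | UnsupportedOperationException e) {
            return "";
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("readlink-demo");
        Path target = Files.createFile(dir.resolve("target"));
        Path link = dir.resolve("link");
        Files.createSymbolicLink(link, target);
        System.out.println(readLink(link));   // the target path
        System.out.println(readLink(target)); // "" (a regular file, not a link)
    }
}
```

Unlike the shell route, this needs no `readlink`/winutils binary and works the same on every platform where the filesystem supports symlinks.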
[jira] [Commented] (HADOOP-15522) Deprecate Shell#ReadLink by using native java code
[ https://issues.apache.org/jira/browse/HADOOP-15522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508654#comment-16508654 ] Íñigo Goiri commented on HADOOP-15522: -- The unit tests passed [here|https://builds.apache.org/job/PreCommit-HADOOP-Build/14751/testReport/org.apache.hadoop.fs/TestFileUtil/]. +1 on [^HADOOP-15522-HADOOP-15461.v2.patch]. I think we can go ahead and commit to the branch.
[jira] [Commented] (HADOOP-15522) Deprecate Shell#ReadLink by using native java code
[ https://issues.apache.org/jira/browse/HADOOP-15522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508623#comment-16508623 ] genericqa commented on HADOOP-15522: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} HADOOP-15461 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 36m 33s{color} | {color:green} HADOOP-15461 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 35m 13s{color} | {color:green} HADOOP-15461 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} HADOOP-15461 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 29s{color} | {color:green} HADOOP-15461 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 36s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 54s{color} | {color:green} HADOOP-15461 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s{color} | {color:green} HADOOP-15461 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 30m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 51s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 31s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}148m 0s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd | | JIRA Issue | HADOOP-15522 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12927338/HADOOP-15522-HADOOP-15461.v2.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux ad4d20b20d19 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | HADOOP-15461 / b59400d | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_171 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/14751/testReport/ | | Max. process+thread count | 1514 (vs. ulimit of 1) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/14751/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This
[jira] [Commented] (HADOOP-15504) Upgrade Maven and Maven Wagon versions
[ https://issues.apache.org/jira/browse/HADOOP-15504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508605#comment-16508605 ] Akira Ajisaka commented on HADOOP-15504: LGTM, +1 pending Jenkins. > Upgrade Maven and Maven Wagon versions > -- > > Key: HADOOP-15504 > URL: https://issues.apache.org/jira/browse/HADOOP-15504 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: Sean Mackrory >Assignee: Sean Mackrory >Priority: Major > Fix For: 3.2.0 > > Attachments: HADOOP-15504.001.patch, HADOOP-15504.002.patch > > > I'm not even sure that Hadoop's combination of the relevant dependencies is > vulnerable (even if they are, this is a relatively minor vulnerability), but > this is at least showing up as an issue in automated vulnerability scans. > Details can be found here [https://maven.apache.org/security.html] > (CVE-2013-0253, CVE-2012-6153). Essentially the combination of maven 3.0.4 > (we use 3.0, and I guess that maps to 3.0.4?) and older versions of the wagon > plugin don't use SSL properly (note that we neither use the WebDAV provider > nor a 2.x version of the SSH plugin, which is why I suspect that the > vulnerability does not affect Hadoop). > I know some dependencies can be especially troublesome to upgrade - I suspect > that Maven's critical role in our build might make this risky - so if anyone > has ideas for how to more completely test this than a full build, please > chime in.
[jira] [Commented] (HADOOP-15506) Upgrade Azure Storage Sdk version to 7.0.0 and update corresponding code blocks
[ https://issues.apache.org/jira/browse/HADOOP-15506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508590#comment-16508590 ] Chris Douglas commented on HADOOP-15506: Marking as resolved, since this was committed to trunk. > Upgrade Azure Storage Sdk version to 7.0.0 and update corresponding code > blocks > --- > > Key: HADOOP-15506 > URL: https://issues.apache.org/jira/browse/HADOOP-15506 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.2.0 >Reporter: Esfandiar Manii >Assignee: Esfandiar Manii >Priority: Minor > Fix For: 3.2.0 > > Attachments: HADOOP-15506-001.patch > > > - Upgraded Azure Storage Sdk to 7.0.0 > - Fixed code issues and couple of tests
[jira] [Updated] (HADOOP-15506) Upgrade Azure Storage Sdk version to 7.0.0 and update corresponding code blocks
[ https://issues.apache.org/jira/browse/HADOOP-15506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Douglas updated HADOOP-15506: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.2.0 Status: Resolved (was: Patch Available)
[jira] [Commented] (HADOOP-15504) Upgrade Maven and Maven Wagon versions
[ https://issues.apache.org/jira/browse/HADOOP-15504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508525#comment-16508525 ] Sean Mackrory commented on HADOOP-15504: Thanks [~ajisakaa] - good catch. I removed that exclusion and double-checked the others were each individually necessary.
[jira] [Updated] (HADOOP-15504) Upgrade Maven and Maven Wagon versions
[ https://issues.apache.org/jira/browse/HADOOP-15504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Mackrory updated HADOOP-15504: --- Attachment: HADOOP-15504.002.patch
[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances
[ https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508483#comment-16508483 ] Rushabh S Shah commented on HADOOP-14445: - Thanks [~xiaochen] for revised patch. I will review this week. > Delegation tokens are not shared between KMS instances > -- > > Key: HADOOP-14445 > URL: https://issues.apache.org/jira/browse/HADOOP-14445 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 2.8.0, 3.0.0-alpha1 > Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption >Reporter: Wei-Chiu Chuang >Assignee: Xiao Chen >Priority: Major > Attachments: HADOOP-14445-branch-2.8.002.patch, > HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, > HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, > HADOOP-14445.06.patch, HADOOP-14445.07.patch, HADOOP-14445.08.patch, > HADOOP-14445.09.patch, HADOOP-14445.10.patch, HADOOP-14445.11.patch, > HADOOP-14445.12.patch, HADOOP-14445.13.patch, HADOOP-14445.14.patch, > HADOOP-14445.branch-2.000.precommit.patch, > HADOOP-14445.branch-2.001.precommit.patch, HADOOP-14445.branch-2.01.patch, > HADOOP-14445.branch-2.02.patch, HADOOP-14445.branch-2.03.patch, > HADOOP-14445.branch-2.04.patch, HADOOP-14445.branch-2.05.patch, > HADOOP-14445.branch-2.06.patch, HADOOP-14445.branch-2.8.003.patch, > HADOOP-14445.branch-2.8.004.patch, HADOOP-14445.branch-2.8.005.patch, > HADOOP-14445.branch-2.8.006.patch, HADOOP-14445.branch-2.8.revert.patch, > HADOOP-14445.revert.patch > > > As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider do > not share delegation tokens. 
(a client uses KMS address/port as the key for > delegation token) > {code:title=DelegationTokenAuthenticatedURL#openConnection} > if (!creds.getAllTokens().isEmpty()) { > InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(), > url.getPort()); > Text service = SecurityUtil.buildTokenService(serviceAddr); > dToken = creds.getToken(service); > {code} > But KMS doc states: > {quote} > Delegation Tokens > Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation > tokens too. > Under HA, A KMS instance must verify the delegation token given by another > KMS instance, by checking the shared secret used to sign the delegation > token. To do this, all KMS instances must be able to retrieve the shared > secret from ZooKeeper. > {quote} > We should either update the KMS documentation, or fix this code to share > delegation tokens.
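The failure mode in the quoted snippet, a delegation token keyed by the address of whichever KMS instance issued it, can be illustrated with a toy credential map. The host names and port below are hypothetical, and the plain HashMap stands in for Hadoop's Credentials class; this is a sketch of the keying problem, not the actual KMS client code.

```java
import java.util.HashMap;
import java.util.Map;

public class TokenServiceDemo {
    // Stand-in for Credentials: delegation tokens keyed by service name.
    static final Map<String, String> creds = new HashMap<>();

    // Mirrors the quoted code's key construction: the service name is
    // built from the host:port of the instance being contacted.
    static String service(String host, int port) {
        return host + ":" + port;
    }

    public static void main(String[] args) {
        // Token fetched through the first HA instance.
        creds.put(service("kms1.example.com", 9600), "delegation-token");

        // Looking it up via the second instance of the same logical KMS
        // misses, because the key embeds the per-instance address.
        System.out.println(creds.get(service("kms1.example.com", 9600)));
        System.out.println(creds.get(service("kms2.example.com", 9600))); // null
    }
}
```

This is why the issue proposes either documenting the limitation or keying tokens by the logical service rather than the individual instance address.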
[jira] [Updated] (HADOOP-14445) Delegation tokens are not shared between KMS instances
[ https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-14445: --- Status: Patch Available (was: Open) > Delegation tokens are not shared between KMS instances > -- > > Key: HADOOP-14445 > URL: https://issues.apache.org/jira/browse/HADOOP-14445 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 3.0.0-alpha1, 2.8.0 > Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption >Reporter: Wei-Chiu Chuang >Assignee: Xiao Chen >Priority: Major > Attachments: HADOOP-14445-branch-2.8.002.patch, > HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, > HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, > HADOOP-14445.06.patch, HADOOP-14445.07.patch, HADOOP-14445.08.patch, > HADOOP-14445.09.patch, HADOOP-14445.10.patch, HADOOP-14445.11.patch, > HADOOP-14445.12.patch, HADOOP-14445.13.patch, HADOOP-14445.14.patch, > HADOOP-14445.branch-2.000.precommit.patch, > HADOOP-14445.branch-2.001.precommit.patch, HADOOP-14445.branch-2.01.patch, > HADOOP-14445.branch-2.02.patch, HADOOP-14445.branch-2.03.patch, > HADOOP-14445.branch-2.04.patch, HADOOP-14445.branch-2.05.patch, > HADOOP-14445.branch-2.06.patch, HADOOP-14445.branch-2.8.003.patch, > HADOOP-14445.branch-2.8.004.patch, HADOOP-14445.branch-2.8.005.patch, > HADOOP-14445.branch-2.8.006.patch, HADOOP-14445.branch-2.8.revert.patch, > HADOOP-14445.revert.patch > > > As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider do > not share delegation tokens. 
(a client uses KMS address/port as the key for > delegation token) > {code:title=DelegationTokenAuthenticatedURL#openConnection} > if (!creds.getAllTokens().isEmpty()) { > InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(), > url.getPort()); > Text service = SecurityUtil.buildTokenService(serviceAddr); > dToken = creds.getToken(service); > {code} > But KMS doc states: > {quote} > Delegation Tokens > Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation > tokens too. > Under HA, A KMS instance must verify the delegation token given by another > KMS instance, by checking the shared secret used to sign the delegation > token. To do this, all KMS instances must be able to retrieve the shared > secret from ZooKeeper. > {quote} > We should either update the KMS documentation, or fix this code to share > delegation tokens. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
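The {code} snippet quoted above keys delegation tokens by the URL's resolved host and port. The effect can be sketched in plain Java (hostnames are hypothetical and the credentials store is simulated with a map; this is an illustration of the keying behavior, not Hadoop code):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: tokens live in a credentials map keyed by "host:port", mirroring
// SecurityUtil.buildTokenService. A token issued by one KMS instance is
// therefore invisible when the client is load-balanced to another instance.
public class KmsTokenKeying {
    static String buildTokenService(String host, int port) {
        // Hadoop derives the token-service key from the service address.
        return host + ":" + port;
    }

    public static void main(String[] args) {
        Map<String, String> credentials = new HashMap<>();
        // Token acquired from the first KMS instance.
        credentials.put(buildTokenService("kms1.example.com", 9600), "kms-dt-1");

        // LoadBalancingKMSClientProvider later routes a request to kms2.
        String lookupKey = buildTokenService("kms2.example.com", 9600);
        String token = credentials.get(lookupKey);
        System.out.println(token == null
            ? "no token for " + lookupKey + " -> client must authenticate again"
            : "found " + token);
    }
}
```

This is why either the documentation (which implies tokens are shared under HA via a ZooKeeper-backed secret) or the client-side lookup needs to change.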
[jira] [Commented] (HADOOP-15483) Upgrade jquery to version 3.3.1
[ https://issues.apache.org/jira/browse/HADOOP-15483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508473#comment-16508473 ] genericqa commented on HADOOP-15483: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 56s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 19m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 30m 6s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . 
hadoop-ozone hadoop-ozone/acceptance-test {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 38s{color} | {color:red} hadoop-ozone/ozone-manager in trunk has 1 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 18s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 19m 6s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch 5861 line(s) with tabs. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 4s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 46s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . 
hadoop-ozone hadoop-ozone/acceptance-test {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 19s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}168m 47s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 53s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}374m 16s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.client.impl.TestBlockReaderLocal | | | hadoop.hdfs.server.datanode.TestDataNodeUUID | | | hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun | | | hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageSchema | | | hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageEntities | | |
[jira] [Commented] (HADOOP-15522) Deprecate Shell#ReadLink by using native java code
[ https://issues.apache.org/jira/browse/HADOOP-15522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508461#comment-16508461 ] Íñigo Goiri commented on HADOOP-15522: -- Given that we are logging the error, I think it's safe to catch all the exceptions as in [^HADOOP-15522-HADOOP-15461.v2.patch]. Let's see what Yetus comes back with (we need to check the new unit tests from HADOOP-15516) but this LGTM. > Deprecate Shell#ReadLink by using native java code > -- > > Key: HADOOP-15522 > URL: https://issues.apache.org/jira/browse/HADOOP-15522 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola >Priority: Major > Attachments: HADOOP-15522-HADOOP-15461.v1.patch, > HADOOP-15522-HADOOP-15461.v2.patch > > > Hadoop uses the shell to read symbolic links. Now that Hadoop relies on Java > 7+, we can deprecate all the shell code and rely on the Java APIs. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
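The java.nio replacement and the exception-handling question debated in the review above can be sketched as follows. This is an illustrative sketch, not the attached patch: the class and method names are hypothetical, and returning an empty string after a failure mirrors the "log and catch broadly" approach the reviewers settled on.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NotLinkException;
import java.nio.file.Path;

// Hypothetical sketch of replacing Shell-based readlink with java.nio.
public class ReadLinkSketch {
    public static String readLink(Path link) {
        try {
            return Files.readSymbolicLink(link).toString();
        } catch (UnsupportedOperationException | NotLinkException e) {
            return "";   // file system has no symlinks, or path is not a link
        } catch (IOException | SecurityException e) {
            return "";   // a real implementation would log the error here
        }
    }

    public static void main(String[] args) throws IOException {
        Path target = Files.createTempFile("target", ".txt");
        Path link = target.resolveSibling("link-" + System.nanoTime());
        Files.createSymbolicLink(link, target); // may need privileges on Windows
        System.out.println(readLink(link));     // prints the target path
        System.out.println(readLink(target));   // regular file -> empty string
        Files.deleteIfExists(link);
        Files.deleteIfExists(target);
    }
}
```

Catching the specific types first keeps the "not a link" case distinguishable if the two branches ever need different logging.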
[jira] [Commented] (HADOOP-14178) Move Mockito up to version 2.x
[ https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508415#comment-16508415 ] Akira Ajisaka commented on HADOOP-14178: Thanks [~boky01] for reviewing the patch! 018 patch: * Reflected Andras's comment * Use Mockito 2 in hadoop-hdds and hadoop-ozone > Move Mockito up to version 2.x > -- > > Key: HADOOP-14178 > URL: https://issues.apache.org/jira/browse/HADOOP-14178 > Project: Hadoop Common > Issue Type: Sub-task > Components: test >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Akira Ajisaka >Priority: Major > Attachments: HADOOP-14178.001.patch, HADOOP-14178.002.patch, > HADOOP-14178.003.patch, HADOOP-14178.004.patch, HADOOP-14178.005-wip.patch, > HADOOP-14178.005-wip2.patch, HADOOP-14178.005-wip3.patch, > HADOOP-14178.005-wip4.patch, HADOOP-14178.005-wip5.patch, > HADOOP-14178.005-wip6.patch, HADOOP-14178.005.patch, HADOOP-14178.006.patch, > HADOOP-14178.007.patch, HADOOP-14178.008.patch, HADOOP-14178.009.patch, > HADOOP-14178.010.patch, HADOOP-14178.011.patch, HADOOP-14178.012.patch, > HADOOP-14178.013.patch, HADOOP-14178.014.patch, HADOOP-14178.015.patch, > HADOOP-14178.016.patch, HADOOP-14178.017.patch, HADOOP-14178.018.patch > > > I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 > since the switch to maven in 2011. > Mockito is now at version 2.1, [with lots of Java 8 > support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. > That' s not just defining actions as closures, but in supporting Optional > types, mocking methods in interfaces, etc. > It's only used for testing, and, *provided there aren't regressions*, cost of > upgrade is low. The good news: test tools usually come with good test > coverage. The bad: mockito does go deep into java bytecodes. 
[jira] [Updated] (HADOOP-14178) Move Mockito up to version 2.x
[ https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-14178: --- Attachment: HADOOP-14178.018.patch > Move Mockito up to version 2.x > -- > > Key: HADOOP-14178 > URL: https://issues.apache.org/jira/browse/HADOOP-14178 > Project: Hadoop Common > Issue Type: Sub-task > Components: test >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Akira Ajisaka >Priority: Major > Attachments: HADOOP-14178.001.patch, HADOOP-14178.002.patch, > HADOOP-14178.003.patch, HADOOP-14178.004.patch, HADOOP-14178.005-wip.patch, > HADOOP-14178.005-wip2.patch, HADOOP-14178.005-wip3.patch, > HADOOP-14178.005-wip4.patch, HADOOP-14178.005-wip5.patch, > HADOOP-14178.005-wip6.patch, HADOOP-14178.005.patch, HADOOP-14178.006.patch, > HADOOP-14178.007.patch, HADOOP-14178.008.patch, HADOOP-14178.009.patch, > HADOOP-14178.010.patch, HADOOP-14178.011.patch, HADOOP-14178.012.patch, > HADOOP-14178.013.patch, HADOOP-14178.014.patch, HADOOP-14178.015.patch, > HADOOP-14178.016.patch, HADOOP-14178.017.patch, HADOOP-14178.018.patch > > > I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 > since the switch to maven in 2011. > Mockito is now at version 2.1, [with lots of Java 8 > support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. > That' s not just defining actions as closures, but in supporting Optional > types, mocking methods in interfaces, etc. > It's only used for testing, and, *provided there aren't regressions*, cost of > upgrade is low. The good news: test tools usually come with good test > coverage. The bad: mockito does go deep into java bytecodes. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-15522) Deprecate Shell#ReadLink by using native java code
[ https://issues.apache.org/jira/browse/HADOOP-15522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508357#comment-16508357 ] Giovanni Matteo Fumarola edited comment on HADOOP-15522 at 6/11/18 5:12 PM: Thanks [~elgoiri] for the review, ReadSymbolic can only throw those exceptions, we should catch only those and throw the others. I updated the patch [^HADOOP-15522-HADOOP-15461.v2.patch] with your feedback. was (Author: giovanni.fumarola): Thanks [~elgoiri] for the review, ReadSymbolic can only throw those exceptions, we should catch only those and throw the others. > Deprecate Shell#ReadLink by using native java code > -- > > Key: HADOOP-15522 > URL: https://issues.apache.org/jira/browse/HADOOP-15522 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola >Priority: Major > Attachments: HADOOP-15522-HADOOP-15461.v1.patch, > HADOOP-15522-HADOOP-15461.v2.patch > > > Hadoop uses the shell to read symbolic links. Now that Hadoop relies on Java > 7+, we can deprecate all the shell code and rely on the Java APIs. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15522) Deprecate Shell#ReadLink by using native java code
[ https://issues.apache.org/jira/browse/HADOOP-15522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Giovanni Matteo Fumarola updated HADOOP-15522: -- Attachment: HADOOP-15522-HADOOP-15461.v2.patch > Deprecate Shell#ReadLink by using native java code > -- > > Key: HADOOP-15522 > URL: https://issues.apache.org/jira/browse/HADOOP-15522 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola >Priority: Major > Attachments: HADOOP-15522-HADOOP-15461.v1.patch, > HADOOP-15522-HADOOP-15461.v2.patch > > > Hadoop uses the shell to read symbolic links. Now that Hadoop relies on Java > 7+, we can deprecate all the shell code and rely on the Java APIs. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15522) Deprecate Shell#ReadLink by using native java code
[ https://issues.apache.org/jira/browse/HADOOP-15522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508357#comment-16508357 ] Giovanni Matteo Fumarola commented on HADOOP-15522: --- Thanks [~elgoiri] for the review, ReadSymbolic can only throw those exceptions, we should catch only those and throw the others. > Deprecate Shell#ReadLink by using native java code > -- > > Key: HADOOP-15522 > URL: https://issues.apache.org/jira/browse/HADOOP-15522 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola >Priority: Major > Attachments: HADOOP-15522-HADOOP-15461.v1.patch > > > Hadoop uses the shell to read symbolic links. Now that Hadoop relies on Java > 7+, we can deprecate all the shell code and rely on the Java APIs. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15483) Upgrade jquery to version 3.3.1
[ https://issues.apache.org/jira/browse/HADOOP-15483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508352#comment-16508352 ] Jitendra Nath Pandey commented on HADOOP-15483: --- Since both HDFS and Yarn UI are verified with this change, I am inclined to commit it. +1 > Upgrade jquery to version 3.3.1 > --- > > Key: HADOOP-15483 > URL: https://issues.apache.org/jira/browse/HADOOP-15483 > Project: Hadoop Common > Issue Type: Task >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Attachments: HADOOP-15483.001.patch, HADOOP-15483.002.patch, > HADOOP-15483.003.patch, HADOOP-15483.004.patch, HADOOP-15483.005.patch, > HADOOP-15483.006.patch, HADOOP-15483.007.patch > > > This Jira aims to upgrade jquery to version 3.3.1. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15483) Upgrade jquery to version 3.3.1
[ https://issues.apache.org/jira/browse/HADOOP-15483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508351#comment-16508351 ] Jitendra Nath Pandey commented on HADOOP-15483: --- It seems the patch in this Jira is a super set of HADOOP-15484. > Upgrade jquery to version 3.3.1 > --- > > Key: HADOOP-15483 > URL: https://issues.apache.org/jira/browse/HADOOP-15483 > Project: Hadoop Common > Issue Type: Task >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Attachments: HADOOP-15483.001.patch, HADOOP-15483.002.patch, > HADOOP-15483.003.patch, HADOOP-15483.004.patch, HADOOP-15483.005.patch, > HADOOP-15483.006.patch, HADOOP-15483.007.patch > > > This Jira aims to upgrade jquery to version 3.3.1. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15522) Deprecate Shell#ReadLink by using native java code
[ https://issues.apache.org/jira/browse/HADOOP-15522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508349#comment-16508349 ] Íñigo Goiri commented on HADOOP-15522: -- In [^HADOOP-15522-HADOOP-15461.v1.patch], can we just do catch Exception instead of doing it for each of them? > Deprecate Shell#ReadLink by using native java code > -- > > Key: HADOOP-15522 > URL: https://issues.apache.org/jira/browse/HADOOP-15522 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola >Priority: Major > Attachments: HADOOP-15522-HADOOP-15461.v1.patch > > > Hadoop uses the shell to read symbolic links. Now that Hadoop relies on Java > 7+, we can deprecate all the shell code and rely on the Java APIs. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15518) Authentication filter calling handler after request already authenticated
[ https://issues.apache.org/jira/browse/HADOOP-15518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508318#comment-16508318 ] Eric Yang commented on HADOOP-15518: [~sunilg] Multiple AuthenticationFilters configured with different service principal names is a corner case that shouldn't exist but the code is allowing this to happen. Comments in this JIRA and YARN-8108 should explain why this is unsupported use case. The casting problem is using the same HTTP principal and YARN code is activating multiple filters that based on AuthenticationFilter. Token casting issue didn't exist prior to this patch. This patch is making assumption that filters based on AuthenticationFilter would make compatible tokens, which RMAuthenticationFilter and AuthenticationFilter don't make the same type of token. Thus, the casting problem occurs. This problem can be eliminated by applying same type of AuthenticationFilter on a server port. YARN-8108 can fix YARN resource manager. There might be other places in Hadoop that might have similar problems, like KMSAuthenticationFilter and DelegationTokenAuthenticationFilter that need to be reviewed to understand the impact of this change. > Authentication filter calling handler after request already authenticated > - > > Key: HADOOP-15518 > URL: https://issues.apache.org/jira/browse/HADOOP-15518 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.7.1 >Reporter: Kevin Minder >Assignee: Kevin Minder >Priority: Major > Attachments: HADOOP-15518-001.patch > > > The hadoop-auth AuthenticationFilter will invoke its handler even if a prior > successful authentication has occurred in the current request. This > primarily affects situations where multiple authentication mechanism has been > configured. 
For example when core-site.xml's has > hadoop.http.authentication.type=kerberos and yarn-site.xml has > yarn.timeline-service.http-authentication.type=kerberos the result is an > attempt to perform two Kerberos authentications for the same request. This > in turn results in Kerberos triggering a replay attack detection. The > javadocs for AuthenticationHandler > ([https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java)] > indicate for the authenticate method that > {quote}This method is invoked by the AuthenticationFilter only if the HTTP > client request is not yet authenticated. > {quote} > This does not appear to be the case in practice. > I've create a patch and tested on a limited number of functional use cases > (e.g. the timeline-service issue noted above). If there is general agreement > that the change is valid I'll add unit tests to the patch. > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15518) Authentication filter calling handler after request already authenticated
[ https://issues.apache.org/jira/browse/HADOOP-15518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16507946#comment-16507946 ] Sunil Govindan commented on HADOOP-15518: - [~eyang] This latest issue (Problem accessing /proxy/application_1528498597648_0001/) is a different problem, correct? bq.If multiple AuthenticationFilters are configured, and service principal names are different Are you mentioning a case where AuthenticationFilter is added multiple time (like Spnego Filter and authentication) but will use different keytab? > Authentication filter calling handler after request already authenticated > - > > Key: HADOOP-15518 > URL: https://issues.apache.org/jira/browse/HADOOP-15518 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.7.1 >Reporter: Kevin Minder >Assignee: Kevin Minder >Priority: Major > Attachments: HADOOP-15518-001.patch > > > The hadoop-auth AuthenticationFilter will invoke its handler even if a prior > successful authentication has occurred in the current request. This > primarily affects situations where multiple authentication mechanism has been > configured. For example when core-site.xml's has > hadoop.http.authentication.type=kerberos and yarn-site.xml has > yarn.timeline-service.http-authentication.type=kerberos the result is an > attempt to perform two Kerberos authentications for the same request. This > in turn results in Kerberos triggering a replay attack detection. The > javadocs for AuthenticationHandler > ([https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java)] > indicate for the authenticate method that > {quote}This method is invoked by the AuthenticationFilter only if the HTTP > client request is not yet authenticated. > {quote} > This does not appear to be the case in practice. 
> I've created a patch and tested it on a limited number of functional use cases > (e.g. the timeline-service issue noted above). If there is general agreement > that the change is valid, I'll add unit tests to the patch. > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
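The behavior the patch aims for — a filter that skips its authentication handler when an earlier filter in the chain has already authenticated the request — can be sketched in plain Java. The servlet API and Hadoop classes are simulated here; all names are illustrative, not the actual patch.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: each filter checks the request's authentication state and only
// invokes its handler (e.g. a Kerberos negotiation) when no prior filter
// has authenticated the request, avoiding a second SPNEGO round trip.
public class AuthChainSketch {
    static class Request {
        String principal;                 // null until authenticated
        final List<String> handlerCalls = new ArrayList<>();
    }

    static class AuthFilter {
        final String name;
        AuthFilter(String name) { this.name = name; }

        void doFilter(Request req) {
            if (req.principal != null) {
                return;                   // already authenticated: skip handler
            }
            req.handlerCalls.add(name);   // handler runs exactly once per request
            req.principal = "user@EXAMPLE.COM";
        }
    }

    public static void main(String[] args) {
        Request req = new Request();
        new AuthFilter("hadoop.http.authentication").doFilter(req);
        new AuthFilter("yarn.timeline-service.http-authentication").doFilter(req);
        // Only the first filter ran its handler, so no replay detection fires.
        System.out.println(req.handlerCalls); // [hadoop.http.authentication]
    }
}
```

Note that, as the comments above point out, this only helps when the stacked filters produce compatible tokens; it does not make differently-configured filters interchangeable.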
[jira] [Commented] (HADOOP-15483) Upgrade jquery to version 3.3.1
[ https://issues.apache.org/jira/browse/HADOOP-15483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16507942#comment-16507942 ] Sunil Govindan commented on HADOOP-15483: - A quick fix for the above issue mentioned by [~rohithsharma] Attached v7 patch. Other RM pages seems fine. > Upgrade jquery to version 3.3.1 > --- > > Key: HADOOP-15483 > URL: https://issues.apache.org/jira/browse/HADOOP-15483 > Project: Hadoop Common > Issue Type: Task >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Attachments: HADOOP-15483.001.patch, HADOOP-15483.002.patch, > HADOOP-15483.003.patch, HADOOP-15483.004.patch, HADOOP-15483.005.patch, > HADOOP-15483.006.patch, HADOOP-15483.007.patch > > > This Jira aims to upgrade jquery to version 3.3.1. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15483) Upgrade jquery to version 3.3.1
[ https://issues.apache.org/jira/browse/HADOOP-15483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sunil Govindan updated HADOOP-15483: Attachment: HADOOP-15483.007.patch > Upgrade jquery to version 3.3.1 > --- > > Key: HADOOP-15483 > URL: https://issues.apache.org/jira/browse/HADOOP-15483 > Project: Hadoop Common > Issue Type: Task >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Attachments: HADOOP-15483.001.patch, HADOOP-15483.002.patch, > HADOOP-15483.003.patch, HADOOP-15483.004.patch, HADOOP-15483.005.patch, > HADOOP-15483.006.patch, HADOOP-15483.007.patch > > > This Jira aims to upgrade jquery to version 3.3.1. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15499) Performance severe drop when running RawErasureCoderBenchmark with NativeRSRawErasureCoder
[ https://issues.apache.org/jira/browse/HADOOP-15499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16507715#comment-16507715 ] Hudson commented on HADOOP-15499: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14397 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14397/]) HADOOP-15499. Performance severe drops when running (sammi.chen: rev 18201b882a38ad875358c5d23c09b0ef903c2f91) * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/NativeXORRawEncoder.java * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoderBenchmark.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractNativeRawEncoder.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/NativeRSRawEncoder.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/NativeRSRawDecoder.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/NativeXORRawDecoder.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractNativeRawDecoder.java > Performance severe drop when running RawErasureCoderBenchmark with > NativeRSRawErasureCoder > -- > > Key: HADOOP-15499 > URL: https://issues.apache.org/jira/browse/HADOOP-15499 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.0.0, 3.0.1, 3.0.2, 3.1.1 >Reporter: SammiChen >Assignee: SammiChen >Priority: Major > Fix For: 3.2.0, 3.1.1, 3.0.4 > > Attachments: HADOOP-15499.001.patch, HADOOP-15499.002.patch > > > Run RawErasureCoderBenchmark which is a micro-benchmark to test EC codec > encoding/decoding performance. > 50 concurrency Native ISA-L coder has the less throughput than 1 concurrency > Native ISA-L case. It's abnormal. 
> > bin/hadoop jar ./share/hadoop/common/hadoop-common-3.2.0-SNAPSHOT-tests.jar > org.apache.hadoop.io.erasurecode.rawcoder.RawErasureCoderBenchmark encode 3 1 > 1024 1024 > Using 126MB buffer. > ISA-L coder encode 1008MB data, with chunk size 1024KB > Total time: 0.19 s. > Total throughput: 5390.37 MB/s > Threads statistics: > 1 threads in total. > Min: 0.18 s, Max: 0.18 s, Avg: 0.18 s, 90th Percentile: 0.18 s. > > bin/hadoop jar ./share/hadoop/common/hadoop-common-3.2.0-SNAPSHOT-tests.jar > org.apache.hadoop.io.erasurecode.rawcoder.RawErasureCoderBenchmark encode 3 > 50 1024 10240 > Using 120MB buffer. > ISA-L coder encode 54000MB data, with chunk size 10240KB > Total time: 11.58 s. > Total throughput: 4662 MB/s > Threads statistics: > 50 threads in total. > Min: 0.55 s, Max: 11.5 s, Avg: 6.32 s, 90th Percentile: 10.45 s. > > RawErasureCoderBenchmark shares a single coder between all concurrent > threads. While > NativeRSRawEncoder and NativeRSRawDecoder has synchronized key work on > doDecode and doEncode function. So 50 concurrent threads are forced to use > the shared coder encode/decode function one by one. > > To resolve the issue, there are two approaches. > # Refactor RawErasureCoderBenchmark to use dedicated coder for each > concurrent thread. > # Refactor NativeRSRawEncoder and NativeRSRawDecoder to get better > concurrency. Since the synchronized key work is to try to protect the > private variable nativeCoder from being checked in doEncode/doDecode and > being modified in release. We can use reentrantReadWriteLock to increase the > concurrency since doEncode/doDecode can be called multiple times without > change the nativeCoder state. > I prefer approach 2 and will upload a patch later. > > > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15499) Performance severe drop when running RawErasureCoderBenchmark with NativeRSRawErasureCoder
[ https://issues.apache.org/jira/browse/HADOOP-15499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HADOOP-15499: --- Fix Version/s: 3.0.4 3.1.1 3.2.0 > Performance severe drop when running RawErasureCoderBenchmark with > NativeRSRawErasureCoder > -- > > Key: HADOOP-15499 > URL: https://issues.apache.org/jira/browse/HADOOP-15499 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.0.0, 3.0.1, 3.0.2, 3.1.1 >Reporter: SammiChen >Assignee: SammiChen >Priority: Major > Fix For: 3.2.0, 3.1.1, 3.0.4 > > Attachments: HADOOP-15499.001.patch, HADOOP-15499.002.patch > > > Run RawErasureCoderBenchmark which is a micro-benchmark to test EC codec > encoding/decoding performance. > 50 concurrency Native ISA-L coder has the less throughput than 1 concurrency > Native ISA-L case. It's abnormal. > > bin/hadoop jar ./share/hadoop/common/hadoop-common-3.2.0-SNAPSHOT-tests.jar > org.apache.hadoop.io.erasurecode.rawcoder.RawErasureCoderBenchmark encode 3 1 > 1024 1024 > Using 126MB buffer. > ISA-L coder encode 1008MB data, with chunk size 1024KB > Total time: 0.19 s. > Total throughput: 5390.37 MB/s > Threads statistics: > 1 threads in total. > Min: 0.18 s, Max: 0.18 s, Avg: 0.18 s, 90th Percentile: 0.18 s. > > bin/hadoop jar ./share/hadoop/common/hadoop-common-3.2.0-SNAPSHOT-tests.jar > org.apache.hadoop.io.erasurecode.rawcoder.RawErasureCoderBenchmark encode 3 > 50 1024 10240 > Using 120MB buffer. > ISA-L coder encode 54000MB data, with chunk size 10240KB > Total time: 11.58 s. > Total throughput: 4662 MB/s > Threads statistics: > 50 threads in total. > Min: 0.55 s, Max: 11.5 s, Avg: 6.32 s, 90th Percentile: 10.45 s. > > RawErasureCoderBenchmark shares a single coder between all concurrent > threads. While > NativeRSRawEncoder and NativeRSRawDecoder has synchronized key work on > doDecode and doEncode function. So 50 concurrent threads are forced to use > the shared coder encode/decode function one by one. 
>
> To resolve the issue, there are two approaches:
> # Refactor RawErasureCoderBenchmark to use a dedicated coder for each
> concurrent thread.
> # Refactor NativeRSRawEncoder and NativeRSRawDecoder for better
> concurrency. The synchronized keyword there protects the private variable
> nativeCoder from being read in doEncode/doDecode while it is being
> modified in release. A ReentrantReadWriteLock can increase concurrency,
> since doEncode/doDecode can be called concurrently without changing the
> nativeCoder state.
> I prefer approach 2 and will upload a patch later.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
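Approach 2 above can be sketched as follows. This is a simplified illustration of the read/write-lock idea only, not the actual Hadoop patch: the class, the `encode` signature, and the `long` stand-in for the native coder handle are all hypothetical; the real NativeRSRawEncoder calls into native ISA-L code.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of approach 2: many threads may encode concurrently under the
// shared read lock, while release() takes the exclusive write lock so the
// native handle cannot be freed while an encode is in flight.
public class NativeCoderSketch {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private long nativeCoder = 1L; // hypothetical stand-in for the native handle

    public void encode(byte[] input) {
        lock.readLock().lock();   // shared: encode does not mutate coder state
        try {
            if (nativeCoder == 0) {
                throw new IllegalStateException("coder already released");
            }
            // ... native ISA-L encode call would go here ...
        } finally {
            lock.readLock().unlock();
        }
    }

    public void release() {
        lock.writeLock().lock();  // exclusive: waits until no encode holds the read lock
        try {
            nativeCoder = 0;      // free the native resource exactly once
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```

Because the read lock is shared, the 50 benchmark threads no longer serialize on a single monitor; only release() excludes other callers.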
[jira] [Commented] (HADOOP-15499) Performance severe drop when running RawErasureCoderBenchmark with NativeRSRawErasureCoder
[ https://issues.apache.org/jira/browse/HADOOP-15499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16507710#comment-16507710 ]

SammiChen commented on HADOOP-15499:
------------------------------------
Thanks [~xiaochen] for the review. Committed to trunk, branch-3.0 & branch-3.1.
[jira] [Updated] (HADOOP-15499) Performance severe drop when running RawErasureCoderBenchmark with NativeRSRawErasureCoder
[ https://issues.apache.org/jira/browse/HADOOP-15499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

SammiChen updated HADOOP-15499:
-------------------------------
    Resolution: Fixed
        Status: Resolved  (was: Patch Available)