[GitHub] [hadoop] aajisaka commented on a change in pull request #3027: HDFS-16031. Possible Resource Leak in org.apache.hadoop.hdfs.server.aliasmap#InMemoryAliasMap
aajisaka commented on a change in pull request #3027:
URL: https://github.com/apache/hadoop/pull/3027#discussion_r637728863

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryAliasMap.java

## @@ -320,21 +320,18 @@ static File createSnapshot(InMemoryAliasMap aliasMap) throws IOException {

```diff
   private static File getCompressedAliasMap(File aliasMapDir)
       throws IOException {
     File outCompressedFile = new File(aliasMapDir.getParent(), TAR_NAME);
-    BufferedOutputStream bOut = null;
-    GzipCompressorOutputStream gzOut = null;
-    TarArchiveOutputStream tOut = null;
-    try {
-      bOut = new BufferedOutputStream(
-          Files.newOutputStream(outCompressedFile.toPath()));
-      gzOut = new GzipCompressorOutputStream(bOut);
-      tOut = new TarArchiveOutputStream(gzOut);
+
+    try (BufferedOutputStream bOut = new BufferedOutputStream(
+        Files.newOutputStream(outCompressedFile.toPath()));
+        GzipCompressorOutputStream gzOut = new GzipCompressorOutputStream(bOut);
+        TarArchiveOutputStream tOut = new TarArchiveOutputStream(gzOut)) {
+
       addFileToTarGzRecursively(tOut, aliasMapDir, "", new Configuration());
-    } finally {
       if (tOut != null) {
         tOut.finish();
       }
```

Review comment:
Thanks for the update.

- Before: `tOut.finish()` is called if addFileToTarGzRecursively throws an exception.
- Your patch: `tOut.finish()` is not called if addFileToTarGzRecursively throws an exception.

I think we need an extra try-finally clause:

```java
try {
  addFileToTarGzRecursively(tOut, aliasMapDir, "", new Configuration());
} finally {
  tOut.finish();
}
```

`tOut` cannot be null in the try-with-resources clause, so we can remove the null check.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
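The pattern the reviewer suggests — try-with-resources to guarantee `close()`, plus a nested try/finally so `finish()` also runs on the error path — can be sketched with the JDK's `GZIPOutputStream` standing in for the commons-compress stream classes (a minimal illustration, not the actual Hadoop patch; the `compress` helper is hypothetical):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class FinishInTryWithResources {

    // Hypothetical helper illustrating the reviewer's pattern:
    // try-with-resources guarantees close(); the nested try/finally
    // guarantees finish() even when the work inside throws.
    static byte[] compress(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gzOut = new GZIPOutputStream(bos)) {
            try {
                gzOut.write(data); // stand-in for addFileToTarGzRecursively(...)
            } finally {
                gzOut.finish();    // runs on the error path too; no null check needed
            }
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] out = compress("hello".getBytes());
        // a gzip stream always starts with the magic bytes 0x1f 0x8b
        System.out.println((out[0] & 0xff) == 0x1f && (out[1] & 0xff) == 0x8b);
    }
}
```

Because try-with-resources closes resources in reverse declaration order, the same structure carries over to the chained commons-compress streams: each wrapper is closed before the stream it wraps.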
[GitHub] [hadoop] hadoop-yetus commented on pull request #3023: HDFS-16028. Add a configuration item for special trash dir
hadoop-yetus commented on pull request #3023:
URL: https://github.com/apache/hadoop/pull/3023#issuecomment-846806097

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 0m 35s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 34m 35s | | trunk passed |
| +1 :green_heart: | compile | 20m 39s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 18m 9s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | checkstyle | 1m 10s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 34s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 9s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 1m 41s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 2m 23s | | trunk passed |
| +1 :green_heart: | shadedclient | 15m 37s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 53s | | the patch passed |
| +1 :green_heart: | compile | 20m 9s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 20m 9s | | the patch passed |
| +1 :green_heart: | compile | 18m 2s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | javac | 18m 2s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 1m 9s | [/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3023/3/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt) | hadoop-common-project/hadoop-common: The patch generated 1 new + 225 unchanged - 0 fixed = 226 total (was 225) |
| +1 :green_heart: | mvnsite | 1m 32s | | the patch passed |
| +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. |
| +1 :green_heart: | javadoc | 1m 5s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 1m 41s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 2m 32s | | the patch passed |
| +1 :green_heart: | shadedclient | 15m 49s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| +1 :green_heart: | unit | 17m 3s | | hadoop-common in the patch passed. |
| +1 :green_heart: | asflicense | 0m 59s | | The patch does not generate ASF License warnings. |
| | | | 179m 5s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3023/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3023 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml |
| uname | Linux f38537fd1c2a 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 48a439494ba7ca181237e0271f41b28ef477683b |
| Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3023/3/testReport/ |
| Max. process+thread count | 1260 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3023/3/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically genera
[GitHub] [hadoop] hadoop-yetus commented on pull request #3027: HDFS-16031. Possible Resource Leak in org.apache.hadoop.hdfs.server.aliasmap#InMemoryAliasMap
hadoop-yetus commented on pull request #3027:
URL: https://github.com/apache/hadoop/pull/3027#issuecomment-846772078

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 0m 51s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 33m 22s | | trunk passed |
| +1 :green_heart: | compile | 1m 25s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 1m 13s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 1m 1s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 24s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 57s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 28s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 3m 15s | | trunk passed |
| +1 :green_heart: | shadedclient | 18m 55s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 15s | | the patch passed |
| +1 :green_heart: | compile | 1m 17s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 1m 17s | | the patch passed |
| +1 :green_heart: | compile | 1m 8s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 1m 8s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 56s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3027/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 :green_heart: | mvnsite | 1m 13s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 48s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 23s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 3m 22s | | the patch passed |
| +1 :green_heart: | shadedclient | 18m 48s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| -1 :x: | unit | 351m 8s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3027/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 38s | | The patch does not generate ASF License warnings. |
| | | | 443m 17s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
| | hadoop.cli.TestErasureCodingCLI |
| | hadoop.hdfs.TestDFSShell |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
| | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3027/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3027 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux b6609061ac3d 4.15.0-142-generic #146-Ubuntu SMP Tue Apr 13 01:11:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / d0d81d605cadde6dd7ecc0813e2a929e31d22f97 |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| T
[GitHub] [hadoop] haiyang1987 commented on pull request #3036: HDFS-15998. Fix NullPointException In listOpenFiles
haiyang1987 commented on pull request #3036:
URL: https://github.com/apache/hadoop/pull/3036#issuecomment-846720962

@jojochuang Thanks for the comments. Later, I'll try to add a unit test.
[jira] [Updated] (HADOOP-17717) Update wildfly openssl to 1.1.3.Final
[ https://issues.apache.org/jira/browse/HADOOP-17717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HADOOP-17717:
-------------------------------------
    Target Version/s: 3.3.2  (was: 3.3.1)

> Update wildfly openssl to 1.1.3.Final
> -------------------------------------
>
>                 Key: HADOOP-17717
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17717
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Wei-Chiu Chuang
>            Assignee: Wei-Chiu Chuang
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> HADOOP-17649 got stalled. IMO we can bump the version to 1.1.3.Final instead,
> at least, for branch-3.3.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] zhuxiangyi commented on a change in pull request #2981: HDFS-16008. RBF: Tool to initialize ViewFS Mapping to Router
zhuxiangyi commented on a change in pull request #2981:
URL: https://github.com/apache/hadoop/pull/2981#discussion_r637659170

## File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java

## @@ -1036,6 +1057,83 @@ private boolean updateQuota(String mount, long nsQuota, long ssQuota)

```diff
     return updateResponse.getStatus();
   }

+  /**
+   * Initialize the ViewFS mount point to the Router,
+   * either to specify a cluster or to initialize it all.
+   * @param clusterName The specified cluster to initialize,
+   *        AllCluster was then all clusters.
+   * @return If the quota was updated.
+   * @throws IOException Error adding the mount point.
+   */
+  public boolean initViewFsToMountTable(String clusterName)
+      throws IOException {
+    // fs.viewfs.mounttable.ClusterX.link./data
+    final String mountTablePrefix;
+    if (clusterName.equals(ALL_CLUSTERS)) {
+      mountTablePrefix =
+          Constants.CONFIG_VIEWFS_PREFIX + ".*" +
+          Constants.CONFIG_VIEWFS_LINK + ".";
+    } else {
+      mountTablePrefix =
+          Constants.CONFIG_VIEWFS_PREFIX + "." + clusterName + "." +
+          Constants.CONFIG_VIEWFS_LINK + ".";
+    }
+    final String rootPath = "/";
+    Map<String, String> viewFsMap = getConf().getValByRegex(
+        mountTablePrefix + rootPath);
+    if (viewFsMap.isEmpty()) {
+      System.out.println("There is no ViewFs mapping to initialize.");
+      return true;
+    }
+    for (Entry<String, String> entry : viewFsMap.entrySet()) {
+      Path path = new Path(entry.getValue());
+      URI destUri = path.toUri();
+      String mountKey = entry.getKey();
+      DestinationOrder order = DestinationOrder.HASH;
+      String mount = mountKey.replaceAll(mountTablePrefix, "");
+      if (!destUri.getScheme().equals("hdfs")) {
+        System.out.println("Only supports HDFS, " +
+            "added Mount Point failed , " + mountKey);
+      }
+      if (!mount.startsWith(rootPath) ||
+          !destUri.getPath().startsWith(rootPath)) {
+        System.out.println("Added Mount Point failed " + mountKey);
+        continue;
+      }
+      String[] nss = new String[]{destUri.getAuthority()};
+      boolean added = addMount(
+          mount, nss, destUri.getPath(), false,
+          false, order, getACLEntityFormHdfsPath(path, getConf()));
```

Review comment:
@Hexiaoqiao I didn't find any problems here; can you tell me the details? Thank you very much.
[jira] [Updated] (HADOOP-17699) Remove hardcoded SunX509 usage from SSLFactory
[ https://issues.apache.org/jira/browse/HADOOP-17699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HADOOP-17699:
-------------------------------------
    Target Version/s: 3.3.2

> Remove hardcoded SunX509 usage from SSLFactory
> ----------------------------------------------
>
>                 Key: HADOOP-17699
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17699
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Xiaoyu Yao
>            Assignee: Xiaoyu Yao
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>          Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> In SSLFactory.SSLCERTIFICATE, used by FileBasedKeyStoresFactory and
> ReloadingX509TrustManager, there is a hardcoded reference to "SunX509" which
> is used to get a KeyManager/TrustManager. This KeyManager type might not be
> available if using the other JSSE providers, e.g., in FIPS deployment.
>
> {code:java}
> WARN org.apache.hadoop.hdfs.web.URLConnectionFactory: Cannot load customized
> ssl related configuration. Fall back to system-generic settings.
> java.security.NoSuchAlgorithmException: SunX509 KeyManagerFactory not available
>   at sun.security.jca.GetInstance.getInstance(GetInstance.java:159)
>   at javax.net.ssl.KeyManagerFactory.getInstance(KeyManagerFactory.java:137)
>   at org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:186)
>   at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:187)
>   at org.apache.hadoop.hdfs.web.SSLConnectionConfigurator.<init>(SSLConnectionConfigurator.java:50)
>   at org.apache.hadoop.hdfs.web.URLConnectionFactory.getSSLConnectionConfiguration(URLConnectionFactory.java:100)
>   at org.apache.hadoop.hdfs.web.URLConnectionFactory.newDefaultURLConnectionFactory(URLConnectionFactory.java:79)
> {code}
> This ticket is opened to use the DefaultAlgorithm defined by the Java system
> properties ssl.KeyManagerFactory.algorithm and ssl.TrustManagerFactory.algorithm.
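The fix direction the ticket describes — resolving the factory algorithm from the JVM's security properties rather than hardcoding "SunX509" — looks like this in plain JSSE (a minimal sketch, not Hadoop's actual patch):

```java
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.TrustManagerFactory;

public class DefaultSslAlgorithms {
    public static void main(String[] args) throws Exception {
        // Resolve the factory algorithms from the JVM's security properties
        // (ssl.KeyManagerFactory.algorithm / ssl.TrustManagerFactory.algorithm)
        // instead of a hardcoded "SunX509" string, so alternate JSSE
        // providers (e.g. FIPS) keep working.
        String kmfAlg = KeyManagerFactory.getDefaultAlgorithm();
        String tmfAlg = TrustManagerFactory.getDefaultAlgorithm();
        KeyManagerFactory kmf = KeyManagerFactory.getInstance(kmfAlg);
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(tmfAlg);
        System.out.println(kmf.getAlgorithm() + " " + tmf.getAlgorithm());
    }
}
```

On a stock OpenJDK this prints the provider defaults; on a FIPS-configured JVM it picks up whatever the installed provider registers, which is exactly why the hardcoded string fails there.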
[jira] [Updated] (HADOOP-14254) Add a Distcp option to preserve Erasure Coding attributes
[ https://issues.apache.org/jira/browse/HADOOP-14254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HADOOP-14254:
-------------------------------------
    Target Version/s: 3.4.0, 3.3.2  (was: 3.4.0)

> Add a Distcp option to preserve Erasure Coding attributes
> ---------------------------------------------------------
>
>                 Key: HADOOP-14254
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14254
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: tools/distcp
>    Affects Versions: 3.0.0-alpha4
>            Reporter: Wei-Chiu Chuang
>            Assignee: Ayush Saxena
>            Priority: Major
>             Fix For: 3.4.0
>
>         Attachments: HADOOP-14254-01.patch, HADOOP-14254-02.patch,
> HADOOP-14254-03.patch, HADOOP-14254-04.patch, HADOOP-14254.test.patch,
> HDFS-11472.001.patch
>
>
> Currently Distcp does not preserve the erasure coding attributes properly. I
> propose we add a "-pe" switch to ensure erasure coded files at source are
> copied as erasure coded files at destination.
> For example, if the src cluster has the following directories and files that
> are copied to dest cluster
> hdfs://src/ root directory is replicated
> hdfs://src/foo erasure code enabled directory
> hdfs://src/foo/bar erasure coded file
> after distcp, hdfs://dest/foo and hdfs://dest/foo/bar will not be erasure
> coded.
> It may be useful to add such capability. One potential use is for disaster
> recovery. The other use is for out-of-place cluster upgrade.
[jira] [Updated] (HADOOP-17208) LoadBalanceKMSClientProvider#deleteKey should invalidateCache via all KMSClientProvider instances
[ https://issues.apache.org/jira/browse/HADOOP-17208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HADOOP-17208:
-------------------------------------
    Target Version/s: 3.3.2

> LoadBalanceKMSClientProvider#deleteKey should invalidateCache via all
> KMSClientProvider instances
> ---------------------------------------------------------------------
>
>                 Key: HADOOP-17208
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17208
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.8.4
>            Reporter: Xiaoyu Yao
>            Assignee: Xiaoyu Yao
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>          Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Without invalidateCache, the deleted key may still exist in the servers' key
> cache (CachingKeyProvider in KMSWebApp.java) where the delete key was not
> hit. A client may still be able to access encrypted files by specifying to
> connect to KMS instances with a cached version of the deleted key before the
> cache entry (10 min by default) expires.
[jira] [Updated] (HADOOP-17341) Upgrade commons-codec to 1.15
[ https://issues.apache.org/jira/browse/HADOOP-17341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HADOOP-17341:
-------------------------------------
    Target Version/s: 3.3.2

> Upgrade commons-codec to 1.15
> -----------------------------
>
>                 Key: HADOOP-17341
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17341
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Dongjoon Hyun
>            Assignee: Dongjoon Hyun
>            Priority: Minor
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>          Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> This issue aims to upgrade commons-codec to 1.15 to bring the latest bug
> fixes.
> - https://commons.apache.org/proper/commons-codec/changes-report.html#a1.15
[jira] [Updated] (HADOOP-17259) Allow SSLFactory fallback to input config if ssl-*.xml fail to load from classpath
[ https://issues.apache.org/jira/browse/HADOOP-17259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HADOOP-17259:
-------------------------------------
    Target Version/s: 3.3.2

> Allow SSLFactory fallback to input config if ssl-*.xml fail to load from
> classpath
> ------------------------------------------------------------------------
>
>                 Key: HADOOP-17259
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17259
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.8.5
>            Reporter: Xiaoyu Yao
>            Assignee: Xiaoyu Yao
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>          Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Some applications like Tez do not have ssl-client.xml and ssl-server.xml in
> the classpath. Instead, they directly pass the parsed SSL configuration as the
> input configuration object. This ticket is opened to allow this case.
> TEZ-4096 attempts to solve this issue but takes a different approach which
> may not work in existing Hadoop clients that use SSLFactory from
> hadoop-common.
[jira] [Updated] (HADOOP-17282) libzstd-dev should be used instead of libzstd1-dev on Ubuntu 18.04 or higher
[ https://issues.apache.org/jira/browse/HADOOP-17282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HADOOP-17282:
-------------------------------------
    Target Version/s: 3.3.2

> libzstd-dev should be used instead of libzstd1-dev on Ubuntu 18.04 or higher
> ----------------------------------------------------------------------------
>
>                 Key: HADOOP-17282
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17282
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: Takeru Kuramoto
>            Assignee: Takeru Kuramoto
>            Priority: Minor
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>          Time Spent: 2h
>  Remaining Estimate: 0h
>
> libzstd1-dev is a transitional package on Ubuntu 18.04.
> It is better to use libzstd-dev instead of libzstd1-dev in the Dockerfile
> (dev-support/docker/Dockerfile).
[jira] [Updated] (HADOOP-17552) Change ipc.client.rpc-timeout.ms from 0 to 120000 by default to avoid potential hang
[ https://issues.apache.org/jira/browse/HADOOP-17552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HADOOP-17552:
-------------------------------------
    Target Version/s: 3.3.2

> Change ipc.client.rpc-timeout.ms from 0 to 120000 by default to avoid
> potential hang
> ---------------------------------------------------------------------
>
>                 Key: HADOOP-17552
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17552
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: ipc
>    Affects Versions: 3.2.2
>            Reporter: Haoze Wu
>            Assignee: Haoze Wu
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>          Time Spent: 9h 10m
>  Remaining Estimate: 0h
>
> We are doing some systematic fault injection testing in Hadoop-3.2.2 and
> when we try to run a client (e.g., `bin/hdfs dfs -ls /`) to our HDFS cluster
> (1 NameNode, 2 DataNodes), the client gets stuck forever. After some
> investigation, we believe that it's a bug in `hadoop.ipc.Client` because the
> read method of `hadoop.ipc.Client$Connection$PingInputStream` keeps
> swallowing `java.net.SocketTimeoutException` due to the mistaken usage of the
> `rpcTimeout` configuration in the `handleTimeout` method.
>
> *Reproduction*
> Start HDFS with the default configuration. Then execute a client (we used
> the command `bin/hdfs dfs -ls /` in the terminal). While HDFS is trying to
> accept the client's socket, inject a socket error (java.net.SocketException
> or java.io.IOException), specifically at line 1402 (line 1403 or 1404 will
> also work).
> We prepare the scripts for reproduction in a gist
> ([https://gist.github.com/functioner/08bcd86491b8ff32860eafda8c140e24]).
>
> *Diagnosis*
> When the NameNode tries to accept a client's socket, basically there are
> 4 steps:
> # accept the socket (line 1400)
> # configure the socket (line 1402-1404)
> # make the socket a Reader (after line 1404)
> # swallow the possible IOException in line 1350
> {code:java}
> //hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
>     public void run() {
>       while (running) {
>         SelectionKey key = null;
>         try {
>           getSelector().select();
>           Iterator<SelectionKey> iter =
>               getSelector().selectedKeys().iterator();
>           while (iter.hasNext()) {
>             key = iter.next();
>             iter.remove();
>             try {
>               if (key.isValid()) {
>                 if (key.isAcceptable())
>                   doAccept(key);
>               }
>             } catch (IOException e) { // line 1350
>             }
>             key = null;
>           }
>         } catch (OutOfMemoryError e) {
>           // ...
>         } catch (Exception e) {
>           // ...
>         }
>       }
>     }
>
>     void doAccept(SelectionKey key) throws InterruptedException, IOException,
>         OutOfMemoryError {
>       ServerSocketChannel server = (ServerSocketChannel) key.channel();
>       SocketChannel channel;
>       while ((channel = server.accept()) != null) { // line 1400
>         channel.configureBlocking(false); // line 1402
>         channel.socket().setTcpNoDelay(tcpNoDelay); // line 1403
>         channel.socket().setKeepAlive(true); // line 1404
>
>         Reader reader = getReader();
>         Connection c = connectionManager.register(channel,
>             this.listenPort, this.isOnAuxiliaryPort);
>         // If the connectionManager can't take it, close the connection.
>         if (c == null) {
>           if (channel.isOpen()) {
>             IOUtils.cleanup(null, channel);
>           }
>           connectionManager.droppedConnections.getAndIncrement();
>           continue;
>         }
>         key.attach(c); // so closeCurrentConnection can get the object
>         reader.addConnection(c);
>       }
>     }
> {code}
> When a SocketException occurs in line 1402 (or 1403 or 1404), the
> server.accept() in line 1400 has finished, so we expect the following
> behavior:
> # The server (NameNode) accepts this connection but it will basically write
> nothing to this connection because it's not added as a Reader data structure.
> # The client is aware that the connection has been established, and tries to
> read and write in this connection. After some time threshold, the client
> finds that it can't read anything from this connection and exits with some
> exception or error.
> However, we do not observe behavior 2. The client just gets stuck forever
> (>10min). We re-examine the default configuration in
> [https://hadoop.apache.org/docs/r3.2.2/hadoop-project-dist/hadoop-common/core-default.xml]
> and we believe that the client should be able to time out i
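Until such a default lands, the hang described above can be bounded per deployment by setting the client RPC timeout explicitly in core-site.xml; the value below is the 120000 ms default proposed in the issue title (a configuration sketch, not a shipped default):

```xml
<!-- core-site.xml: give client RPC calls a finite timeout instead of the
     unbounded default of 0, so a half-established connection eventually
     fails with a SocketTimeoutException instead of pinging forever. -->
<property>
  <name>ipc.client.rpc-timeout.ms</name>
  <value>120000</value>
</property>
```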
[jira] [Updated] (HADOOP-17044) Revert "HADOOP-8143. Change distcp to have -pb on by default"
[ https://issues.apache.org/jira/browse/HADOOP-17044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HADOOP-17044:
-------------------------------------
    Release Note: Distcp block size is not preserved by default, unless -pb is specified. This restores the behavior prior to Hadoop 3.

> Revert "HADOOP-8143. Change distcp to have -pb on by default"
> -------------------------------------------------------------
>
>                 Key: HADOOP-17044
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17044
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: tools/distcp
>    Affects Versions: 3.0.3, 3.3.0, 3.2.1, 3.1.3
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>             Fix For: 3.0.4, 3.2.2, 3.3.1, 3.1.5
>
>
> revert the HADOOP-8143. "distcp -pb as default" feature as it was
> * breaking s3a uploads
> * breaking incremental uploads to any object store
[GitHub] [hadoop] Nargeshdb commented on pull request #3027: HDFS-16031. Possible Resource Leak in org.apache.hadoop.hdfs.server.aliasmap#InMemoryAliasMap
Nargeshdb commented on pull request #3027:
URL: https://github.com/apache/hadoop/pull/3027#issuecomment-846635685

> Thanks for the patch.

@aajisaka Thanks for the feedback.

> We should use try-with-resources to close the resources.

https://github.com/apache/hadoop/pull/3027/commits/d0d81d605cadde6dd7ecc0813e2a929e31d22f97
[GitHub] [hadoop] hadoop-yetus commented on pull request #3044: Revert "Revert "HDFS-15971. Make mkstemp cross platform (#2898)""
hadoop-yetus commented on pull request #3044: URL: https://github.com/apache/hadoop/pull/3044#issuecomment-846616162 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 13m 44s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 6 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 36m 31s | | trunk passed | | +1 :green_heart: | compile | 2m 51s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 2m 54s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | mvnsite | 0m 27s | | trunk passed | | +1 :green_heart: | shadedclient | 58m 23s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 14s | | the patch passed | | +1 :green_heart: | compile | 2m 43s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | cc | 2m 43s | | the patch passed | | +1 :green_heart: | golang | 2m 43s | | the patch passed | | +1 :green_heart: | javac | 2m 43s | | the patch passed | | +1 :green_heart: | compile | 2m 47s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | cc | 2m 47s | | the patch passed | | +1 :green_heart: | golang | 2m 47s | | the patch passed | | +1 :green_heart: | javac | 2m 47s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | +1 :green_heart: | mvnsite | 0m 21s | | the patch passed | | +1 :green_heart: | shadedclient | 15m 40s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 102m 44s | | hadoop-hdfs-native-client in the patch passed. | | +1 :green_heart: | asflicense | 0m 30s | | The patch does not generate ASF License warnings. | | | | 199m 21s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3044/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3044 | | Optional Tests | dupname asflicense compile cc mvnsite javac unit codespell golang | | uname | Linux 6a91a863a395 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 5ad00b64df6995a4bf0bfb23b34fc8a66022ddbf | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3044/1/testReport/ | | Max. process+thread count | 586 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3044/1/console | | versions | git=2.25.1 maven=3.6.3 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
[jira] [Work logged] (HADOOP-17727) Modularize docker images
[ https://issues.apache.org/jira/browse/HADOOP-17727?focusedWorklogId=600955&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600955 ] ASF GitHub Bot logged work on HADOOP-17727: --- Author: ASF GitHub Bot Created on: 23/May/21 18:04 Start Date: 23/May/21 18:04 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3043: URL: https://github.com/apache/hadoop/pull/3043#issuecomment-846601944 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 52s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 37s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 22m 41s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 0s | | trunk passed | | +1 :green_heart: | shadedclient | 15m 46s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 23s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 0m 0s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | hadolint | 0m 5s | | No new issues. | | +1 :green_heart: | mvnsite | 0m 0s | | the patch passed | | +1 :green_heart: | pylint | 0m 2s | | No new issues. | | +1 :green_heart: | shellcheck | 0m 1s | | No new issues. 
| | +1 :green_heart: | shadedclient | 15m 7s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | asflicense | 0m 28s | [/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/6/artifact/out/results-asflicense.txt) | The patch generated 3 ASF License warnings. | | | | 69m 54s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/6/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3043 | | Optional Tests | dupname asflicense codespell hadolint shellcheck shelldocs mvnsite unit pylint | | uname | Linux 9023a2be9897 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / dbc09e74fee1f771a222dfbc999ccc2e302b8012 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/6/testReport/ | | Max. process+thread count | 516 (vs. ulimit of 5500) | | modules | C: U: | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/6/console | | versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 hadolint=1.11.1-0-g0e692dd pylint=2.6.0 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
Issue Time Tracking --- Worklog Id: (was: 600955) Time Spent: 2h (was: 1h 50m) > Modularize docker images > Key: HADOOP-17727 > URL: https://issues.apache.org/jira/browse/HADOOP-17727 > Project: Hadoop Common > Issue Type: Improvement > Components: build > Affects Versions: 3.4.0 > Reporter: Gautham Banasandra > Assignee: Gautham Banasandra > Priority: Major > Labels: pull-request-available > Time Spent: 2h > Remaining Estimate: 0h > We're now creating the *Dockerfile*s for different platforms. We need a way to manage the packages in a clean way, as maintaining the packages for all the different environments becomes cumbersome.
[GitHub] [hadoop] GauthamBanasandra closed pull request #3014: HDFS-16026. Restore cross platform mkstemp
GauthamBanasandra closed pull request #3014: URL: https://github.com/apache/hadoop/pull/3014
[GitHub] [hadoop] GauthamBanasandra commented on pull request #3014: HDFS-16026. Restore cross platform mkstemp
GauthamBanasandra commented on pull request #3014: URL: https://github.com/apache/hadoop/pull/3014#issuecomment-846593890 Abandoning this. Will be handled in https://github.com/apache/hadoop/pull/3044.
[GitHub] [hadoop] hadoop-yetus commented on pull request #2767: HDFS-15790. Make ProtobufRpcEngineProtos and ProtobufRpcEngineProtos2 Co-Exist
hadoop-yetus commented on pull request #2767: URL: https://github.com/apache/hadoop/pull/2767#issuecomment-846593536 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 44s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | buf | 0m 0s | | buf was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 6 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 58s | | trunk passed | | +1 :green_heart: | compile | 20m 50s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 18m 0s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 9s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 34s | | trunk passed | | +1 :green_heart: | javadoc | 1m 7s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 42s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 2m 22s | | trunk passed | | +1 :green_heart: | shadedclient | 15m 36s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 52s | | the patch passed | | +1 :green_heart: | compile | 20m 8s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | -1 :x: | cc | 20m 8s | [/results-compile-cc-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2767/8/artifact/out/results-compile-cc-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) | root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 22 new + 305 unchanged - 22 fixed = 327 total (was 327) | | +1 :green_heart: | javac | 20m 8s | | the patch passed | | +1 :green_heart: | compile | 18m 11s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | -1 :x: | cc | 18m 11s | [/results-compile-cc-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2767/8/artifact/out/results-compile-cc-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) | root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 34 new + 293 unchanged - 34 fixed = 327 total (was 327) | | +1 :green_heart: | javac | 18m 11s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 1m 8s | [/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2767/8/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt) | hadoop-common-project/hadoop-common: The patch generated 2 new + 210 unchanged - 7 fixed = 212 total (was 217) | | +1 :green_heart: | mvnsite | 1m 34s | | the patch passed | | +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. 
| | +1 :green_heart: | javadoc | 1m 5s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 39s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 2m 32s | | the patch passed | | +1 :green_heart: | shadedclient | 15m 52s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 17m 8s | | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 59s | | The patch does not generate ASF License warnings. | | | | 179m 1s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2767/8/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2767 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell xml spotbugs checkstyle cc buflint bufcompat | | uname | Linux 31674546d4ed 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39
[jira] [Work logged] (HADOOP-17727) Modularize docker images
[ https://issues.apache.org/jira/browse/HADOOP-17727?focusedWorklogId=600945&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600945 ] ASF GitHub Bot logged work on HADOOP-17727: --- Author: ASF GitHub Bot Created on: 23/May/21 16:55 Start Date: 23/May/21 16:55 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3043: URL: https://github.com/apache/hadoop/pull/3043#issuecomment-846593246 (!) A patch to the testing environment has been detected. Re-executing against the patched versions to perform further tests. The console is at https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/6/console in case of problems. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 600945) Time Spent: 1h 50m (was: 1h 40m) > Modularize docker images > > > Key: HADOOP-17727 > URL: https://issues.apache.org/jira/browse/HADOOP-17727 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.4.0 >Reporter: Gautham Banasandra >Assignee: Gautham Banasandra >Priority: Major > Labels: pull-request-available > Time Spent: 1h 50m > Remaining Estimate: 0h > > We're now creating the *Dockerfile*s for different platforms. We need a way > to manage the packages in a clean way as maintaining the packages for all the > different environments becomes cumbersome. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] GauthamBanasandra opened a new pull request #3044: Revert "Revert "HDFS-15971. Make mkstemp cross platform (#2898)""
GauthamBanasandra opened a new pull request #3044: URL: https://github.com/apache/hadoop/pull/3044
* This reverts commit aed13f0f42fefe30a53eb73c65c2072a031f173e.
* Verified by building locally on CentOS 7 that Hadoop builds fine with this PR.
* Build log - https://issues.apache.org/jira/secure/attachment/13025814/build-log.zip
* Reverted commit - https://issues.apache.org/jira/secure/attachment/13025815/commit-details.txt
* Dockerfile_centos_7 - https://issues.apache.org/jira/secure/attachment/13025816/Dockerfile_centos_7
[jira] [Work logged] (HADOOP-17727) Modularize docker images
[ https://issues.apache.org/jira/browse/HADOOP-17727?focusedWorklogId=600936&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600936 ] ASF GitHub Bot logged work on HADOOP-17727: --- Author: ASF GitHub Bot Created on: 23/May/21 14:43 Start Date: 23/May/21 14:43 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3043: URL: https://github.com/apache/hadoop/pull/3043#issuecomment-846574131 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 0s | | Docker mode activated. | | -1 :x: | docker | 0m 30s | | Docker failed to build yetus/hadoop:ad923ad5642. | | Subsystem | Report/Notes | |--:|:-| | GITHUB PR | https://github.com/apache/hadoop/pull/3043 | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/5/console | | versions | git=2.17.1 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 600936) Time Spent: 1h 40m (was: 1.5h) > Modularize docker images > > > Key: HADOOP-17727 > URL: https://issues.apache.org/jira/browse/HADOOP-17727 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.4.0 >Reporter: Gautham Banasandra >Assignee: Gautham Banasandra >Priority: Major > Labels: pull-request-available > Time Spent: 1h 40m > Remaining Estimate: 0h > > We're now creating the *Dockerfile*s for different platforms. We need a way > to manage the packages in a clean way as maintaining the packages for all the > different environments becomes cumbersome. 
[jira] [Work logged] (HADOOP-17727) Modularize docker images
[ https://issues.apache.org/jira/browse/HADOOP-17727?focusedWorklogId=600935&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600935 ] ASF GitHub Bot logged work on HADOOP-17727: --- Author: ASF GitHub Bot Created on: 23/May/21 14:42 Start Date: 23/May/21 14:42 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3043: URL: https://github.com/apache/hadoop/pull/3043#issuecomment-846574028 (!) A patch to the testing environment has been detected. Re-executing against the patched versions to perform further tests. The console is at https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/5/console in case of problems. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 600935) Time Spent: 1.5h (was: 1h 20m) > Modularize docker images > > > Key: HADOOP-17727 > URL: https://issues.apache.org/jira/browse/HADOOP-17727 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.4.0 >Reporter: Gautham Banasandra >Assignee: Gautham Banasandra >Priority: Major > Labels: pull-request-available > Time Spent: 1.5h > Remaining Estimate: 0h > > We're now creating the *Dockerfile*s for different platforms. We need a way > to manage the packages in a clean way as maintaining the packages for all the > different environments becomes cumbersome. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17727) Modularize docker images
[ https://issues.apache.org/jira/browse/HADOOP-17727?focusedWorklogId=600931&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600931 ] ASF GitHub Bot logged work on HADOOP-17727: --- Author: ASF GitHub Bot Created on: 23/May/21 14:07 Start Date: 23/May/21 14:07 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3043: URL: https://github.com/apache/hadoop/pull/3043#issuecomment-846568988 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 20m 59s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 30s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 22m 28s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 0s | | trunk passed | | +1 :green_heart: | shadedclient | 15m 25s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 21s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 0m 0s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -1 :x: | hadolint | 0m 5s | [/results-hadolint.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/4/artifact/out/results-hadolint.txt) | The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 :green_heart: | mvnsite | 0m 0s | | the patch passed | | +1 :green_heart: | pylint | 0m 2s | | No new issues. | | +1 :green_heart: | shellcheck | 0m 0s | | No new issues. | | +1 :green_heart: | shadedclient | 15m 17s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | asflicense | 0m 30s | [/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/4/artifact/out/results-asflicense.txt) | The patch generated 3 ASF License warnings. | | | | 89m 28s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3043 | | Optional Tests | dupname asflicense codespell hadolint shellcheck shelldocs mvnsite unit pylint | | uname | Linux 74a3cd703c2f 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 081f621be31b92a8893bf1933d56a85f6aae307c | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/4/testReport/ | | Max. process+thread count | 516 (vs. ulimit of 5500) | | modules | C: U: | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/4/console | | versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 hadolint=1.11.1-0-g0e692dd pylint=2.6.0 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. 
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 600931) Time Spent: 1h 20m (was: 1h 10m) > Modularize docker images > > > Key: HADOOP-17727 > URL: https://issues.apache.org/jira/browse/HADOOP-17727 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.4.0 >Reporter: Gautham Banasandra >Assignee: Gautham Banasandra >Priority: Major > Labels: pull-request-available > Time Spent: 1h 20m > Remaining Estimate: 0h > > We're now creating the *Docker
[GitHub] [hadoop] vinayakumarb commented on a change in pull request #2767: HDFS-15790. Make ProtobufRpcEngineProtos and ProtobufRpcEngineProtos2 Co-Exist
vinayakumarb commented on a change in pull request #2767: URL: https://github.com/apache/hadoop/pull/2767#discussion_r637551571 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java ## @@ -937,11 +937,18 @@ public int hashCode() { */ static class ProtoClassProtoImpl { final Class protocolClass; - final Object protocolImpl; + final Object protocolImpl; + private final boolean newPBImpl; + ProtoClassProtoImpl(Class protocolClass, Object protocolImpl) { this.protocolClass = protocolClass; this.protocolImpl = protocolImpl; +this.newPBImpl = protocolImpl instanceof BlockingService; } + + public boolean isNewPBImpl() { Review comment: Done ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufRpcEngine.java ## @@ -443,144 +430,52 @@ public Server(Class protocolClass, Object protocolImpl, SecretManager secretManager, String portRangeConfig, AlignmentContext alignmentContext) throws IOException { - super(bindAddress, port, null, numHandlers, - numReaders, queueSizePerHandler, conf, - serverNameFromClass(protocolImpl.getClass()), secretManager, - portRangeConfig); - setAlignmentContext(alignmentContext); - this.verbose = verbose; - registerProtocolAndImpl(RPC.RpcKind.RPC_PROTOCOL_BUFFER, protocolClass, - protocolImpl); + super(protocolClass, protocolImpl, conf, bindAddress, port, numHandlers, + numReaders, queueSizePerHandler, verbose, secretManager, + portRangeConfig, alignmentContext); } -@Override -protected RpcInvoker getServerRpcInvoker(RpcKind rpcKind) { - if (rpcKind == RpcKind.RPC_PROTOCOL_BUFFER) { -return RPC_INVOKER; - } - return super.getServerRpcInvoker(rpcKind); -} - -/** - * Protobuf invoker for {@link RpcInvoker} - */ -static class ProtoBufRpcInvoker implements RpcInvoker { - private static ProtoClassProtoImpl getProtocolImpl(RPC.Server server, - String protoName, long clientVersion) throws RpcServerException { -ProtoNameVer pv = new ProtoNameVer(protoName, clientVersion); 
-ProtoClassProtoImpl impl = -server.getProtocolImplMap(RPC.RpcKind.RPC_PROTOCOL_BUFFER).get(pv); -if (impl == null) { // no match for Protocol AND Version - VerProtocolImpl highest = - server.getHighestSupportedProtocol(RPC.RpcKind.RPC_PROTOCOL_BUFFER, - protoName); - if (highest == null) { -throw new RpcNoSuchProtocolException( -"Unknown protocol: " + protoName); - } - // protocol supported but not the version that client wants - throw new RPC.VersionMismatch(protoName, clientVersion, - highest.version); -} -return impl; +static RpcWritable processCall(RPC.Server server, Review comment: Done. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
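The lookup logic that the diff above moves around — resolve a (protocolName, clientVersion) pair to an implementation, distinguishing "unknown protocol" from "version mismatch" via the highest registered version — can be sketched as follows. This is an illustrative simplification, not Hadoop's actual `ProtoBufRpcInvoker`; the class and exception names here are invented for the sketch.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical, simplified registry mirroring the two error paths in the
// diff: exact match, else "unknown protocol", else "version mismatch".
class ProtocolRegistrySketch {
    private final Map<String, Map<Long, Object>> impls = new HashMap<>();

    void register(String name, long version, Object impl) {
        impls.computeIfAbsent(name, k -> new HashMap<>()).put(version, impl);
    }

    /** Returns the registered impl, or throws describing which check failed. */
    Object lookup(String name, long clientVersion) {
        Map<Long, Object> byVersion = impls.get(name);
        if (byVersion == null || byVersion.isEmpty()) {
            // no version of this protocol is registered at all
            throw new IllegalArgumentException("Unknown protocol: " + name);
        }
        Object impl = byVersion.get(clientVersion);
        if (impl != null) {
            return impl;  // exact (name, version) match
        }
        // protocol supported, but not the version the client wants
        long highest = byVersion.keySet().stream().max(Long::compare).get();
        throw new IllegalStateException("Version mismatch for " + name
            + ": client wants " + clientVersion + ", server has " + highest);
    }
}
```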
[GitHub] [hadoop] vinayakumarb commented on a change in pull request #2767: HDFS-15790. Make ProtobufRpcEngineProtos and ProtobufRpcEngineProtos2 Co-Exist
vinayakumarb commented on a change in pull request #2767: URL: https://github.com/apache/hadoop/pull/2767#discussion_r637551551 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufRpcEngine2.java ## @@ -495,6 +524,7 @@ private static ProtoClassProtoImpl getProtocolImpl(RPC.Server server, * it is. * */ + @SuppressWarnings("deprecation") Review comment: This was to suppress javac warnings due to usage of {{ProtobufRpcEngine}} in {{ProtobufRpcEngine2}}. I have moved it to the new method, where this exact usage is present.
[jira] [Work logged] (HADOOP-17725) Improve error message for token providers in ABFS
[ https://issues.apache.org/jira/browse/HADOOP-17725?focusedWorklogId=600927&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600927 ] ASF GitHub Bot logged work on HADOOP-17725: --- Author: ASF GitHub Bot Created on: 23/May/21 12:56 Start Date: 23/May/21 12:56 Worklog Time Spent: 10m Work Description: virajjasani commented on pull request #3041: URL: https://github.com/apache/hadoop/pull/3041#issuecomment-846559147 > I would appreciate it if you could address the comments, thanks. Thanks for the review @sadikovi, I missed out of on other token providers initially. Please take a look. Thanks -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 600927) Time Spent: 1h 20m (was: 1h 10m) > Improve error message for token providers in ABFS > - > > Key: HADOOP-17725 > URL: https://issues.apache.org/jira/browse/HADOOP-17725 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure, hadoop-thirdparty >Affects Versions: 3.3.0 >Reporter: Ivan >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 1h 20m > Remaining Estimate: 0h > > It would be good to improve error messages for token providers in ABFS. > Currently, when a configuration key is not found or mistyped, the error is > not very clear on what went wrong. It would be good to indicate that the key > was required but not found in Hadoop configuration when creating a token > provider. 
> For example, when running the following code: > {code:java} > import org.apache.hadoop.conf._ > import org.apache.hadoop.fs._ > val conf = new Configuration() > conf.set("fs.azure.account.auth.type", "OAuth") > conf.set("fs.azure.account.oauth.provider.type", > "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider") > conf.set("fs.azure.account.oauth2.client.id", "my-client-id") > // > conf.set("fs.azure.account.oauth2.client.secret.my-account.dfs.core.windows.net", > "my-secret") > conf.set("fs.azure.account.oauth2.client.endpoint", "my-endpoint") > val path = new Path("abfss://contai...@my-account.dfs.core.windows.net/") > val fs = path.getFileSystem(conf) > fs.getFileStatus(path){code} > The following exception is thrown: > {code:java} > TokenAccessProviderException: Unable to load OAuth token provider class. > ... > Caused by: UncheckedExecutionException: java.lang.NullPointerException: > clientSecret > ... > Caused by: NullPointerException: clientSecret {code} > which does not tell what configuration key was not loaded. > > IMHO, it would be good if the exception was something like this: > {code:java} > TokenAccessProviderException: Unable to load OAuth token provider class. > ... > Caused by: ConfigurationPropertyNotFoundException: Configuration property > fs.azure.account.oauth2.client.secret not found. {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
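The improvement HADOOP-17725 asks for can be sketched as below: when a required key such as fs.azure.account.oauth2.client.secret is absent, fail with an exception naming the key instead of a bare NullPointerException. This is a minimal standalone sketch, not the ABFS code; the exception class is modeled on the one proposed in the issue, and the real types and package layout may differ.

```java
import java.util.HashMap;
import java.util.Map;

// Modeled after the exception proposed in the issue; not the real ABFS class.
class ConfigurationPropertyNotFoundException extends RuntimeException {
    ConfigurationPropertyNotFoundException(String key) {
        super("Configuration property " + key + " not found.");
    }
}

// Hypothetical stand-in for a Hadoop Configuration consulted by a token provider.
class TokenProviderConfig {
    private final Map<String, String> conf = new HashMap<>();

    void set(String key, String value) {
        conf.put(key, value);
    }

    /** Returns the value for a required key, or throws naming the missing key. */
    String getRequired(String key) {
        String value = conf.get(key);
        if (value == null) {
            // the message now tells the user exactly which key was not loaded
            throw new ConfigurationPropertyNotFoundException(key);
        }
        return value;
    }
}
```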
[GitHub] [hadoop] virajjasani commented on pull request #3041: HADOOP-17725. Improve error message for token providers in ABFS
virajjasani commented on pull request #3041: URL: https://github.com/apache/hadoop/pull/3041#issuecomment-846559147 > I would appreciate it if you could address the comments, thanks. Thanks for the review @sadikovi, I missed out on the other token providers initially. Please take a look. Thanks
[jira] [Comment Edited] (HADOOP-17725) Improve error message for token providers in ABFS
[ https://issues.apache.org/jira/browse/HADOOP-17725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17350021#comment-17350021 ] Viraj Jasani edited comment on HADOOP-17725 at 5/23/21, 12:55 PM: -- [~ste...@apache.org] Would you like to take a look at https://github.com/apache/hadoop/pull/3041? Thanks was (Author: vjasani): [~ste...@apache.org] Would you like to take a look? Thanks > Improve error message for token providers in ABFS > - > > Key: HADOOP-17725 > URL: https://issues.apache.org/jira/browse/HADOOP-17725 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure, hadoop-thirdparty >Affects Versions: 3.3.0 >Reporter: Ivan >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 1h 10m > Remaining Estimate: 0h > > It would be good to improve error messages for token providers in ABFS. > Currently, when a configuration key is not found or mistyped, the error is > not very clear on what went wrong. It would be good to indicate that the key > was required but not found in Hadoop configuration when creating a token > provider. > For example, when running the following code: > {code:java} > import org.apache.hadoop.conf._ > import org.apache.hadoop.fs._ > val conf = new Configuration() > conf.set("fs.azure.account.auth.type", "OAuth") > conf.set("fs.azure.account.oauth.provider.type", > "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider") > conf.set("fs.azure.account.oauth2.client.id", "my-client-id") > // > conf.set("fs.azure.account.oauth2.client.secret.my-account.dfs.core.windows.net", > "my-secret") > conf.set("fs.azure.account.oauth2.client.endpoint", "my-endpoint") > val path = new Path("abfss://contai...@my-account.dfs.core.windows.net/") > val fs = path.getFileSystem(conf) > fs.getFileStatus(path){code} > The following exception is thrown: > {code:java} > TokenAccessProviderException: Unable to load OAuth token provider class. > ... 
> Caused by: UncheckedExecutionException: java.lang.NullPointerException: > clientSecret > ... > Caused by: NullPointerException: clientSecret {code} > which does not tell what configuration key was not loaded. > > IMHO, it would be good if the exception was something like this: > {code:java} > TokenAccessProviderException: Unable to load OAuth token provider class. > ... > Caused by: ConfigurationPropertyNotFoundException: Configuration property > fs.azure.account.oauth2.client.secret not found. {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17727) Modularize docker images
[ https://issues.apache.org/jira/browse/HADOOP-17727?focusedWorklogId=600925&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600925 ] ASF GitHub Bot logged work on HADOOP-17727: --- Author: ASF GitHub Bot Created on: 23/May/21 12:38 Start Date: 23/May/21 12:38 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3043: URL: https://github.com/apache/hadoop/pull/3043#issuecomment-846556840 (!) A patch to the testing environment has been detected. Re-executing against the patched versions to perform further tests. The console is at https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/4/console in case of problems. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 600925) Time Spent: 1h 10m (was: 1h) > Modularize docker images > > > Key: HADOOP-17727 > URL: https://issues.apache.org/jira/browse/HADOOP-17727 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.4.0 >Reporter: Gautham Banasandra >Assignee: Gautham Banasandra >Priority: Major > Labels: pull-request-available > Time Spent: 1h 10m > Remaining Estimate: 0h > > We're now creating the *Dockerfile*s for different platforms. We need a way > to manage the packages in a clean way as maintaining the packages for all the > different environments becomes cumbersome. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #3043: HADOOP-17727. Modularize docker images
hadoop-yetus commented on pull request #3043: URL: https://github.com/apache/hadoop/pull/3043#issuecomment-846556840 (!) A patch to the testing environment has been detected. Re-executing against the patched versions to perform further tests. The console is at https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3043/4/console in case of problems.
[jira] [Work logged] (HADOOP-17727) Modularize docker images
[ https://issues.apache.org/jira/browse/HADOOP-17727?focusedWorklogId=600924&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600924 ] ASF GitHub Bot logged work on HADOOP-17727: --- Author: ASF GitHub Bot Created on: 23/May/21 12:32 Start Date: 23/May/21 12:32 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3043: URL: https://github.com/apache/hadoop/pull/3043#issuecomment-846555857 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 600924) Time Spent: 1h (was: 50m) > Modularize docker images > > > Key: HADOOP-17727 > URL: https://issues.apache.org/jira/browse/HADOOP-17727 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.4.0 >Reporter: Gautham Banasandra >Assignee: Gautham Banasandra >Priority: Major > Labels: pull-request-available > Time Spent: 1h > Remaining Estimate: 0h > > We're now creating the *Dockerfile*s for different platforms. We need a way > to manage the packages in a clean way as maintaining the packages for all the > different environments becomes cumbersome. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #3043: HADOOP-17727. Modularize docker images
hadoop-yetus commented on pull request #3043: URL: https://github.com/apache/hadoop/pull/3043#issuecomment-846555857
[jira] [Commented] (HADOOP-17725) Improve error message for token providers in ABFS
[ https://issues.apache.org/jira/browse/HADOOP-17725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17350021#comment-17350021 ] Viraj Jasani commented on HADOOP-17725: --- [~ste...@apache.org] Would you like to take a look? Thanks > Improve error message for token providers in ABFS > - > > Key: HADOOP-17725 > URL: https://issues.apache.org/jira/browse/HADOOP-17725 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure, hadoop-thirdparty >Affects Versions: 3.3.0 >Reporter: Ivan >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 1h 10m > Remaining Estimate: 0h > > It would be good to improve error messages for token providers in ABFS. > Currently, when a configuration key is not found or mistyped, the error is > not very clear on what went wrong. It would be good to indicate that the key > was required but not found in Hadoop configuration when creating a token > provider. > For example, when running the following code: > {code:java} > import org.apache.hadoop.conf._ > import org.apache.hadoop.fs._ > val conf = new Configuration() > conf.set("fs.azure.account.auth.type", "OAuth") > conf.set("fs.azure.account.oauth.provider.type", > "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider") > conf.set("fs.azure.account.oauth2.client.id", "my-client-id") > // > conf.set("fs.azure.account.oauth2.client.secret.my-account.dfs.core.windows.net", > "my-secret") > conf.set("fs.azure.account.oauth2.client.endpoint", "my-endpoint") > val path = new Path("abfss://contai...@my-account.dfs.core.windows.net/") > val fs = path.getFileSystem(conf) > fs.getFileStatus(path){code} > The following exception is thrown: > {code:java} > TokenAccessProviderException: Unable to load OAuth token provider class. > ... > Caused by: UncheckedExecutionException: java.lang.NullPointerException: > clientSecret > ... 
> Caused by: NullPointerException: clientSecret {code} > which does not tell what configuration key was not loaded. > > IMHO, it would be good if the exception was something like this: > {code:java} > TokenAccessProviderException: Unable to load OAuth token provider class. > ... > Caused by: ConfigurationPropertyNotFoundException: Configuration property > fs.azure.account.oauth2.client.secret not found. {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] jojochuang commented on a change in pull request #2767: HDFS-15790. Make ProtobufRpcEngineProtos and ProtobufRpcEngineProtos2 Co-Exist
jojochuang commented on a change in pull request #2767: URL: https://github.com/apache/hadoop/pull/2767#discussion_r637485155 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufRpcEngine2.java ## @@ -495,6 +524,7 @@ private static ProtoClassProtoImpl getProtocolImpl(RPC.Server server, * it is. * */ + @SuppressWarnings("deprecation") Review comment: It would help a lot for the applications using the new ProtobufRpcEngine2 what API replaces this deprecated API. Maybe it can be written in the javadoc; or in the release note. ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufRpcEngine.java ## @@ -443,144 +430,52 @@ public Server(Class protocolClass, Object protocolImpl, SecretManager secretManager, String portRangeConfig, AlignmentContext alignmentContext) throws IOException { - super(bindAddress, port, null, numHandlers, - numReaders, queueSizePerHandler, conf, - serverNameFromClass(protocolImpl.getClass()), secretManager, - portRangeConfig); - setAlignmentContext(alignmentContext); - this.verbose = verbose; - registerProtocolAndImpl(RPC.RpcKind.RPC_PROTOCOL_BUFFER, protocolClass, - protocolImpl); + super(protocolClass, protocolImpl, conf, bindAddress, port, numHandlers, + numReaders, queueSizePerHandler, verbose, secretManager, + portRangeConfig, alignmentContext); } -@Override -protected RpcInvoker getServerRpcInvoker(RpcKind rpcKind) { - if (rpcKind == RpcKind.RPC_PROTOCOL_BUFFER) { -return RPC_INVOKER; - } - return super.getServerRpcInvoker(rpcKind); -} - -/** - * Protobuf invoker for {@link RpcInvoker} - */ -static class ProtoBufRpcInvoker implements RpcInvoker { - private static ProtoClassProtoImpl getProtocolImpl(RPC.Server server, - String protoName, long clientVersion) throws RpcServerException { -ProtoNameVer pv = new ProtoNameVer(protoName, clientVersion); -ProtoClassProtoImpl impl = -server.getProtocolImplMap(RPC.RpcKind.RPC_PROTOCOL_BUFFER).get(pv); -if (impl == null) { 
// no match for Protocol AND Version - VerProtocolImpl highest = - server.getHighestSupportedProtocol(RPC.RpcKind.RPC_PROTOCOL_BUFFER, - protoName); - if (highest == null) { -throw new RpcNoSuchProtocolException( -"Unknown protocol: " + protoName); - } - // protocol supported but not the version that client wants - throw new RPC.VersionMismatch(protoName, clientVersion, - highest.version); -} -return impl; +static RpcWritable processCall(RPC.Server server, Review comment: can you add a comment here that this is practically the same as ProtobufRpccEngine2.call() except the Message class, and that if this method is modified, the other method should be updated as well? (Or add the comment in the ProtobufRpccEngine2.call()) ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java ## @@ -937,11 +937,18 @@ public int hashCode() { */ static class ProtoClassProtoImpl { final Class protocolClass; - final Object protocolImpl; + final Object protocolImpl; + private final boolean newPBImpl; + ProtoClassProtoImpl(Class protocolClass, Object protocolImpl) { this.protocolClass = protocolClass; this.protocolImpl = protocolImpl; +this.newPBImpl = protocolImpl instanceof BlockingService; } + + public boolean isNewPBImpl() { Review comment: Might be easier to understand to call it "isShadedPBImpl()" instead -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
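The pattern under review in the first hunk above — record once, at registration time, whether an impl uses the new (shaded) protobuf API by checking its runtime type, so later dispatch can branch on a boolean instead of repeating `instanceof` checks — can be sketched like this. `BlockingService` is stood in for by a local marker interface; the real type is protobuf's `BlockingService`, and the accessor is named per the review suggestion.

```java
// Illustrative sketch only; field and method names follow the diff and the
// reviewer's proposed rename, not necessarily the merged Hadoop code.
class ProtoImplSketch {
    interface BlockingService {}  // stand-in marker, not the protobuf type

    static class ProtoClassProtoImpl {
        final Class<?> protocolClass;
        final Object protocolImpl;
        private final boolean shadedPBImpl;  // decided once, in the constructor

        ProtoClassProtoImpl(Class<?> protocolClass, Object protocolImpl) {
            this.protocolClass = protocolClass;
            this.protocolImpl = protocolImpl;
            this.shadedPBImpl = protocolImpl instanceof BlockingService;
        }

        boolean isShadedPBImpl() {
            return shadedPBImpl;
        }
    }
}
```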
[jira] [Work logged] (HADOOP-17511) Add an Audit plugin point for S3A auditing/context
[ https://issues.apache.org/jira/browse/HADOOP-17511?focusedWorklogId=600922&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600922 ] ASF GitHub Bot logged work on HADOOP-17511: --- Author: ASF GitHub Bot Created on: 23/May/21 11:59 Start Date: 23/May/21 11:59 Worklog Time Spent: 10m Work Description: steveloughran commented on pull request #2807: URL: https://github.com/apache/hadoop/pull/2807#issuecomment-846551528 thank's for the reviews, comments, votes etc. I'll address all of @mehakmeet's little details, push up a rebased/squashed PR to force it through yetus, then merge -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 600922) Time Spent: 19.5h (was: 19h 20m) > Add an Audit plugin point for S3A auditing/context > -- > > Key: HADOOP-17511 > URL: https://issues.apache.org/jira/browse/HADOOP-17511 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.3.1 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > Time Spent: 19.5h > Remaining Estimate: 0h > > Add a way for auditing tools to correlate S3 object calls with Hadoop FS API > calls. > Initially just to log/forward to an auditing service. > Later: let us attach them as parameters in S3 requests, such as opentrace > headeers or (my initial idea: http referrer header -where it will get into > the log) > Challenges > * ensuring the audit span is created for every public entry point. That will > have to include those used in s3guard tools, some defacto public APIs > * and not re-entered for active spans. 
s3A code must not call back into the > FS API points > * Propagation across worker threads -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector
steveloughran commented on pull request #2807: URL: https://github.com/apache/hadoop/pull/2807#issuecomment-846551528 Thanks for the reviews, comments, votes etc. I'll address all of @mehakmeet's little details, push up a rebased/squashed PR to force it through yetus, then merge.
[jira] [Work logged] (HADOOP-17728) Deadlock in FileSystem StatisticsDataReferenceCleaner cleanUp
[ https://issues.apache.org/jira/browse/HADOOP-17728?focusedWorklogId=600921&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600921 ] ASF GitHub Bot logged work on HADOOP-17728: --- Author: ASF GitHub Bot Created on: 23/May/21 11:58 Start Date: 23/May/21 11:58 Worklog Time Spent: 10m Work Description: steveloughran commented on pull request #3042: URL: https://github.com/apache/hadoop/pull/3042#issuecomment-846551321 I'm going to pull @liuml07 in to help review this, as it's a bit of code they've gone near in the past and weak reference queues are new to me -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 600921) Time Spent: 1h 40m (was: 1.5h) > Deadlock in FileSystem StatisticsDataReferenceCleaner cleanUp > - > > Key: HADOOP-17728 > URL: https://issues.apache.org/jira/browse/HADOOP-17728 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 3.2.1 >Reporter: yikf >Priority: Minor > Labels: pull-request-available > Time Spent: 1h 40m > Remaining Estimate: 0h > > Cleaner thread will be blocked if we remove reference from ReferenceQueue > unless the `queue.enqueue` called. > > As shown below, We call ReferenceQueue.remove() now while cleanUp, Call > chain as follow: > *StatisticsDataReferenceCleaner#queue.remove() -> > ReferenceQueue.remove(0) -> lock.wait(0)* > But, lock.notifyAll is called when queue.enqueue only, so Cleaner thread > will be blocked. 
> > ThreadDump: > {code:java} > "Reference Handler" #2 daemon prio=10 os_prio=0 tid=0x7f7afc088800 > nid=0x2119 in Object.wait() [0x7f7b0023] >java.lang.Thread.State: WAITING (on object monitor) > at java.lang.Object.wait(Native Method) > - waiting on <0xc00c2f58> (a java.lang.ref.Reference$Lock) > at java.lang.Object.wait(Object.java:502) > at java.lang.ref.Reference.tryHandlePending(Reference.java:191) > - locked <0xc00c2f58> (a java.lang.ref.Reference$Lock) > at > java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153){code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran commented on pull request #3042: HDFS-16033 Fix issue of the StatisticsDataReferenceCleaner cleanUp
steveloughran commented on pull request #3042: URL: https://github.com/apache/hadoop/pull/3042#issuecomment-846551321 I'm going to pull @liuml07 in to help review this, as it's a bit of code they've gone near in the past and weak reference queues are new to me.
[jira] [Work logged] (HADOOP-17728) Deadlock in FileSystem StatisticsDataReferenceCleaner cleanUp
[ https://issues.apache.org/jira/browse/HADOOP-17728?focusedWorklogId=600920&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600920 ] ASF GitHub Bot logged work on HADOOP-17728: --- Author: ASF GitHub Bot Created on: 23/May/21 11:49 Start Date: 23/May/21 11:49 Worklog Time Spent: 10m Work Description: steveloughran commented on a change in pull request #3042: URL: https://github.com/apache/hadoop/pull/3042#discussion_r637533874 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java ## @@ -4004,12 +4004,14 @@ public void cleanUp() { * Background action to act on references being removed. */ private static class StatisticsDataReferenceCleaner implements Runnable { + private static int REF_QUEUE_POLL_TIMEOUT = 100; Review comment: that's going to be waking every 100 milliseconds, demanding cpu time etc etc. If there has to be a timeout, it needs to be something less disruptive, like 100 seconds. What would happen if that was the case? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 600920) Time Spent: 1.5h (was: 1h 20m) > Deadlock in FileSystem StatisticsDataReferenceCleaner cleanUp > - > > Key: HADOOP-17728 > URL: https://issues.apache.org/jira/browse/HADOOP-17728 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 3.2.1 >Reporter: yikf >Priority: Minor > Labels: pull-request-available > Time Spent: 1.5h > Remaining Estimate: 0h > > Cleaner thread will be blocked if we remove reference from ReferenceQueue > unless the `queue.enqueue` called. 
> > As shown below, we currently call ReferenceQueue.remove() during cleanUp. The call chain is as follows: > *StatisticsDataReferenceCleaner#queue.remove() -> ReferenceQueue.remove(0) -> lock.wait(0)* > However, lock.notifyAll is only called by queue.enqueue, so the cleaner thread will be blocked indefinitely if nothing is enqueued. > > ThreadDump: > {code:java} > "Reference Handler" #2 daemon prio=10 os_prio=0 tid=0x7f7afc088800 > nid=0x2119 in Object.wait() [0x7f7b0023] > java.lang.Thread.State: WAITING (on object monitor) > at java.lang.Object.wait(Native Method) > - waiting on <0xc00c2f58> (a java.lang.ref.Reference$Lock) > at java.lang.Object.wait(Object.java:502) > at java.lang.ref.Reference.tryHandlePending(Reference.java:191) > - locked <0xc00c2f58> (a java.lang.ref.Reference$Lock) > at > java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153){code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
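The blocking behaviour in the thread dump above can be reproduced in a few lines: `ReferenceQueue.remove(0)` parks on the queue's internal lock until the garbage collector enqueues a cleared reference, whereas `ReferenceQueue.remove(timeout)` returns `null` once the timeout expires. This is a minimal stand-alone sketch, not Hadoop code; the class name `RefQueueDemo` and the helper `awaitEnqueue` are illustrative.

```java
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;

public class RefQueueDemo {

    /** Waits up to maxWaitMillis for a cleared reference to be enqueued. */
    static boolean awaitEnqueue(ReferenceQueue<Object> queue, long maxWaitMillis) {
        long deadline = System.currentTimeMillis() + maxWaitMillis;
        while (System.currentTimeMillis() < deadline) {
            try {
                // remove(timeout) wakes up after the timeout elapses; remove(0)
                // would park on the queue's lock until enqueue calls notifyAll.
                if (queue.remove(100) != null) {
                    return true;
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        Object referent = new Object();
        WeakReference<Object> ref = new WeakReference<>(referent, queue);

        // Nothing is enqueued while the referent is still strongly reachable.
        System.out.println("enqueued while reachable: " + (queue.poll() != null));

        referent = null; // drop the strong reference
        System.gc();     // request a collection so the weak ref can be enqueued

        System.out.println("enqueued after GC: " + awaitEnqueue(queue, 5000));
    }
}
```

Because GC timing is not guaranteed, the second print uses a bounded wait rather than `remove(0)`, which is exactly the distinction at issue in the PR.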
[GitHub] [hadoop] steveloughran commented on a change in pull request #3042: HDFS-16033 Fix issue of the StatisticsDataReferenceCleaner cleanUp
steveloughran commented on a change in pull request #3042: URL: https://github.com/apache/hadoop/pull/3042#discussion_r637533874 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java ## @@ -4004,12 +4004,14 @@ public void cleanUp() { * Background action to act on references being removed. */ private static class StatisticsDataReferenceCleaner implements Runnable { + private static int REF_QUEUE_POLL_TIMEOUT = 100; Review comment: that's going to be waking every 100 milliseconds, demanding CPU time even when the queue is idle. If there has to be a timeout, it needs to be something less disruptive, like 100 seconds. What would happen if that was the case?
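The tradeoff Steve raises — a 100 ms poll wakes the thread roughly ten times a second even when the queue is idle, while a much longer timeout keeps the thread parked yet still lets it notice shutdown — can be sketched as an interruptible cleaner loop. This is an illustrative stand-in, not the actual FileSystem code; `CleanerLoop` and `POLL_TIMEOUT_MILLIS` are hypothetical names.

```java
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;

// Hypothetical cleaner loop illustrating the timeout tradeoff discussed above.
// A long timeout bounds the wait (avoiding the remove(0) deadlock) while
// keeping the thread parked almost all of the time.
class CleanerLoop implements Runnable {
    private static final long POLL_TIMEOUT_MILLIS = 100_000; // long, low-churn wait

    private final ReferenceQueue<Object> queue;

    CleanerLoop(ReferenceQueue<Object> queue) {
        this.queue = queue;
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                // remove(timeout) returns null on timeout instead of blocking
                // forever like remove(0), so the loop can re-check its exit
                // condition even if nothing is ever enqueued.
                Reference<?> ref = queue.remove(POLL_TIMEOUT_MILLIS);
                if (ref != null) {
                    ref.clear(); // stand-in for the per-reference cleanup work
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // restore flag; loop exits
            }
        }
    }
}
```

Interrupting the thread either makes `remove` throw `InterruptedException` immediately or trips the loop condition on the next pass, so shutdown never waits for an enqueue.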
[jira] [Updated] (HADOOP-17728) Deadlock in FileSystem StatisticsDataReferenceCleaner cleanUp
[ https://issues.apache.org/jira/browse/HADOOP-17728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-17728: Summary: Deadlock in FileSystem StatisticsDataReferenceCleaner cleanUp (was: Fix issue of the StatisticsDataReferenceCleaner cleanUp) > Deadlock in FileSystem StatisticsDataReferenceCleaner cleanUp > - > > Key: HADOOP-17728 > URL: https://issues.apache.org/jira/browse/HADOOP-17728 > Project: Hadoop Common > Issue Type: Bug > Components: fs > Affects Versions: 3.2.1 > Reporter: yikf > Priority: Minor > Labels: pull-request-available > Time Spent: 1h 20m > Remaining Estimate: 0h > > The cleaner thread will be blocked when it removes a reference from the ReferenceQueue unless `queue.enqueue` is called. > > As shown below, we currently call ReferenceQueue.remove() during cleanUp. The call chain is as follows: > *StatisticsDataReferenceCleaner#queue.remove() -> ReferenceQueue.remove(0) -> lock.wait(0)* > However, lock.notifyAll is only called by queue.enqueue, so the cleaner thread will be blocked indefinitely if nothing is enqueued. > > ThreadDump: > {code:java} > "Reference Handler" #2 daemon prio=10 os_prio=0 tid=0x7f7afc088800 > nid=0x2119 in Object.wait() [0x7f7b0023] > java.lang.Thread.State: WAITING (on object monitor) > at java.lang.Object.wait(Native Method) > - waiting on <0xc00c2f58> (a java.lang.ref.Reference$Lock) > at java.lang.Object.wait(Object.java:502) > at java.lang.ref.Reference.tryHandlePending(Reference.java:191) > - locked <0xc00c2f58> (a java.lang.ref.Reference$Lock) > at > java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153){code}
[jira] [Moved] (HADOOP-17728) Fix issue of the StatisticsDataReferenceCleaner cleanUp
[ https://issues.apache.org/jira/browse/HADOOP-17728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran moved HDFS-16033 to HADOOP-17728: Component/s: (was: hdfs) fs Key: HADOOP-17728 (was: HDFS-16033) Affects Version/s: (was: 3.2.1) 3.2.1 Project: Hadoop Common (was: Hadoop HDFS) > Fix issue of the StatisticsDataReferenceCleaner cleanUp > --- > > Key: HADOOP-17728 > URL: https://issues.apache.org/jira/browse/HADOOP-17728 > Project: Hadoop Common > Issue Type: Bug > Components: fs > Affects Versions: 3.2.1 > Reporter: yikf > Priority: Minor > Labels: pull-request-available > Time Spent: 1h 20m > Remaining Estimate: 0h > > The cleaner thread will be blocked when it removes a reference from the ReferenceQueue unless `queue.enqueue` is called. > > As shown below, we currently call ReferenceQueue.remove() during cleanUp. The call chain is as follows: > *StatisticsDataReferenceCleaner#queue.remove() -> ReferenceQueue.remove(0) -> lock.wait(0)* > However, lock.notifyAll is only called by queue.enqueue, so the cleaner thread will be blocked indefinitely if nothing is enqueued. > > ThreadDump: > {code:java} > "Reference Handler" #2 daemon prio=10 os_prio=0 tid=0x7f7afc088800 > nid=0x2119 in Object.wait() [0x7f7b0023] > java.lang.Thread.State: WAITING (on object monitor) > at java.lang.Object.wait(Native Method) > - waiting on <0xc00c2f58> (a java.lang.ref.Reference$Lock) > at java.lang.Object.wait(Object.java:502) > at java.lang.ref.Reference.tryHandlePending(Reference.java:191) > - locked <0xc00c2f58> (a java.lang.ref.Reference$Lock) > at > java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153){code}