[jira] [Created] (HDFS-13869) Handle NPE for NamenodeBeanMetrics#getFederationMetrics()
Surendra Singh Lilhore created HDFS-13869:
---------------------------------------------

             Summary: Handle NPE for NamenodeBeanMetrics#getFederationMetrics()
                 Key: HDFS-13869
                 URL: https://issues.apache.org/jira/browse/HDFS-13869
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: namenode
    Affects Versions: 3.0.0
            Reporter: Surendra Singh Lilhore

{code:java}
Caused by: java.lang.NullPointerException
	at org.apache.hadoop.hdfs.server.federation.metrics.NamenodeBeanMetrics.getUsed(NamenodeBeanMetrics.java:205)
	at org.apache.hadoop.hdfs.server.federation.metrics.NamenodeBeanMetrics.getCapacityUsed(NamenodeBeanMetrics.java:519)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
{code}
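The trace shows {{getUsed()}} dereferencing a null reference, presumably because the federation metrics service is not yet initialized when the MBean is queried. A minimal sketch of the kind of guard the summary asks for, assuming a {{getFederationMetrics()}} accessor that can return null; the zero fallback and the {{getUsedCapacity()}} call are assumptions, not the committed patch:

{code:java}
// Hypothetical sketch: return a safe default instead of letting the NPE
// escape to the JMX caller while the metrics service is not ready yet.
public long getUsed() {
  FederationMetrics metrics = getFederationMetrics();
  if (metrics == null) {
    return 0; // metrics service not initialized yet
  }
  return metrics.getUsedCapacity(); // assumed accessor name
}
{code}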
[jira] [Created] (HDDS-376) Create custom message structure for use in AuditLogging
Dinesh Chitlangia created HDDS-376:
--------------------------------------

             Summary: Create custom message structure for use in AuditLogging
                 Key: HDDS-376
                 URL: https://issues.apache.org/jira/browse/HDDS-376
             Project: Hadoop Distributed Data Store
          Issue Type: Improvement
            Reporter: Dinesh Chitlangia
            Assignee: Dinesh Chitlangia

In HDDS-198 we introduced a framework for AuditLogging in Ozone. We used StructuredDataMessage for formatting the messages to be logged. Based on discussion with [~jnp] and [~anu], this Jira proposes to create a custom message structure that generates audit messages in the following format:

user=xxx ip=xxx op=_ \{key=val, key1=val1..} ret=XX
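Since the HDDS-198 framework logs through Log4j2's Message API (StructuredDataMessage is a Log4j2 class), one way to realize this format is a custom {{Message}} implementation. A minimal sketch under that assumption; the class name and fields are illustrative, not the committed design:

{code:java}
import java.util.Map;
import java.util.stream.Collectors;
import org.apache.logging.log4j.message.Message;

// Hypothetical sketch: render "user=xxx ip=xxx op=xxx {key=val, ...} ret=XX".
public class AuditMessage implements Message {
  private final String user;
  private final String ip;
  private final String op;
  private final Map<String, String> params;
  private final String ret;

  public AuditMessage(String user, String ip, String op,
      Map<String, String> params, String ret) {
    this.user = user;
    this.ip = ip;
    this.op = op;
    this.params = params;
    this.ret = ret;
  }

  @Override
  public String getFormattedMessage() {
    // Join the op-specific key/value pairs as "key=val, key1=val1".
    String kvs = params.entrySet().stream()
        .map(e -> e.getKey() + "=" + e.getValue())
        .collect(Collectors.joining(", "));
    return "user=" + user + " ip=" + ip + " op=" + op
        + " {" + kvs + "} ret=" + ret;
  }

  @Override
  public String getFormat() {
    return getFormattedMessage();
  }

  @Override
  public Object[] getParameters() {
    return null;
  }

  @Override
  public Throwable getThrowable() {
    return null;
  }
}
{code}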
[jira] [Created] (HDFS-13868) WebHDFS: GETSNAPSHOTDIFF API NPE when param "snapshotname" is given but "oldsnapshotname" is not.
Siyao Meng created HDFS-13868:
---------------------------------

             Summary: WebHDFS: GETSNAPSHOTDIFF API NPE when param "snapshotname" is given but "oldsnapshotname" is not.
                 Key: HDFS-13868
                 URL: https://issues.apache.org/jira/browse/HDFS-13868
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: webhdfs
            Reporter: Siyao Meng

Proof:
{code:java}
# Bash
$ curl "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF&user.name=hdfs&oldsnapshotnam=snap2&snapshotname=snap3"
# Note that I intentionally typed the wrong parameter name for "oldsnapshotname" above to cause NPE.
{"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}

# OR
$ curl "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF&user.name=hdfs&oldsnapshotname=&snapshotname=snap3"
{"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}

# OR
$ curl "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF&user.name=hdfs&snapshotname=snap3"
{"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
{code}
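The NPE suggests the handler passes the raw "oldsnapshotname" value straight into the snapshot-diff call. A minimal sketch of the kind of up-front check that would turn this into a clean client error; the helper name and where it hooks in are assumptions, not the actual fix:

{code:java}
// Hypothetical helper: reject a missing/empty snapshot-name parameter so
// WebHDFS can return a descriptive IllegalArgumentException instead of an
// opaque NullPointerException.
private static String checkSnapshotParam(String paramName, String value) {
  if (value == null || value.isEmpty()) {
    throw new IllegalArgumentException(
        "Mandatory parameter \"" + paramName + "\" is missing or empty.");
  }
  return value;
}
{code}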
[jira] [Created] (HDFS-13867) Add Validation for max arguments for Router admin ls,clrQuota,setQuota,rm and nameservice commands
Ayush Saxena created HDFS-13867:
-----------------------------------

             Summary: Add Validation for max arguments for Router admin ls,clrQuota,setQuota,rm and nameservice commands
                 Key: HDFS-13867
                 URL: https://issues.apache.org/jira/browse/HDFS-13867
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Ayush Saxena
            Assignee: Ayush Saxena

Add validation to check that the total number of arguments provided to a Router admin command does not exceed the maximum possible. In most cases, if unrelated extra parameters follow the required arguments, the command does not reject them; it performs the action with the required parameters and silently ignores the extras, which it ideally should not do.
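A minimal sketch of the validation idea; how the real patch wires this into the Router admin argument parsing, and the exact per-command maximums, are not shown here:

{code:java}
// Hypothetical helper: fail a subcommand that carries more arguments than
// it can accept, instead of silently ignoring the extras.
private static void checkMaxArguments(String cmd, String[] args, int maxArgs) {
  if (args.length > maxArgs) {
    throw new IllegalArgumentException("Too many arguments for " + cmd
        + ": expected at most " + maxArgs + " but got " + args.length);
  }
}
{code}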
[jira] [Created] (HDDS-375) Update ContainerReportHandler to not send events for open containers
Ajay Kumar created HDDS-375:
-------------------------------

             Summary: Update ContainerReportHandler to not send events for open containers
                 Key: HDDS-375
                 URL: https://issues.apache.org/jira/browse/HDDS-375
             Project: Hadoop Distributed Data Store
          Issue Type: New Feature
            Reporter: Ajay Kumar
[jira] [Created] (HDFS-13866) Reset exitcode to -1 when invalid params are input
Ranith Sardar created HDFS-13866:
------------------------------------

             Summary: Reset exitcode to -1 when invalid params are input
                 Key: HDFS-13866
                 URL: https://issues.apache.org/jira/browse/HDFS-13866
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Ranith Sardar
[jira] [Created] (HDFS-13865) Improve fuse-dfs testing and dev docs
Gabor Bota created HDFS-13865:
---------------------------------

             Summary: Improve fuse-dfs testing and dev docs
                 Key: HDFS-13865
                 URL: https://issues.apache.org/jira/browse/HDFS-13865
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: fuse-dfs
            Reporter: Gabor Bota

We had a bug in fuse-dfs (found via customer escalation) and we provided a fix. Testing the patch was not straightforward, and we found no easy way to do it. I'm filing this jira for the following:
* Improve {{hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/doc/README}} with information on how to run native tests for fuse-dfs
* Describe what {{hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/test/TestFuseDFS.java}} is and how to run it
* Resolve the todos in {{hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/test/fuse_workload.c}} to add more test coverage:
{noformat}
// TODO: implement/test access, mknod, symlink
// TODO: rmdir on non-dir, non-empty dir
// TODO: test unlink-during-writing
// TODO: test weird open flags failing
// TODO: test chown, chmod
{noformat}
[jira] [Created] (HDFS-13864) Service for FSNamesystem#clearCorruptLazyPersistFiles to iterate with writeLock
Gabor Bota created HDFS-13864:
---------------------------------

             Summary: Service for FSNamesystem#clearCorruptLazyPersistFiles to iterate with writeLock
                 Key: HDFS-13864
                 URL: https://issues.apache.org/jira/browse/HDFS-13864
             Project: Hadoop HDFS
          Issue Type: Improvement
            Reporter: Gabor Bota

In HDFS-13672 we agreed that the current implementation could be changed, but not in the way it was addressed in that issue. This jira is a follow-up to HDFS-13672. As a workaround, we can disable the scrubber interval when debugging. In real-world/customer environments there are no cases with that many corrupted lazy persist files.

We agreed that:
* holding the lock for a long time is an anti-pattern
* since the common case is that there are zero lazy persist files, a better (though different) change would be to skip running this scrubber entirely if there aren't any lazy persist files

We had the following ideas:
* create a service where you can iterate through a list of elements while holding the writeLock, applying a lambda function to each element (see the sketch below)
* what we need here is a tail iterator that starts at the last processed element
* open question: should we disable {{clearCorruptLazyPersistFiles}} by default?
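A minimal, self-contained sketch of the service idea from the first bullet; the chunked lock re-acquisition and the generic signature are assumptions about a possible design, not an agreed implementation:

{code:java}
import java.util.Iterator;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.function.Consumer;

// Hypothetical sketch: apply a lambda to each element while holding the
// write lock, but only in bounded chunks, so the scrubber never pins the
// lock across the whole list. The tail iterator resumes where the previous
// chunk stopped.
public final class ChunkedLockedProcessor {
  public static <T> void process(ReadWriteLock lock, Iterator<T> tail,
      int chunkSize, Consumer<T> action) {
    while (tail.hasNext()) {
      lock.writeLock().lock();
      try {
        for (int i = 0; i < chunkSize && tail.hasNext(); i++) {
          action.accept(tail.next());
        }
      } finally {
        lock.writeLock().unlock();
      }
    }
  }
}
{code}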
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/878/

[Aug 23, 2018 4:35:43 AM] (sunilg) YARN-8015. Support all types of placement constraint support for
[Aug 23, 2018 12:54:38 PM] (tasanuma) HADOOP-14314. The OpenSolaris taxonomy link is dead in
[Aug 23, 2018 2:29:46 PM] (jlowe) YARN-8649. NPE in localizer hearbeat processing if a container is killed
[Aug 23, 2018 6:30:28 PM] (xyao) HDDS-328. Support export and import of the KeyValueContainer.

-1 overall

The following subsystems voted -1:
    asflicense findbugs pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

   XML :

      Parsing Error(s):
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

   FindBugs :

      module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine
      Unread field: FSBasedSubmarineStorageImpl.java:[line 39]
      Found reliance on default encoding in org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component): new java.io.FileWriter(File) At YarnServiceJobSubmitter.java:[line 192]
      org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component) may fail to clean up java.io.Writer on checked exception; obligation to clean up resource created at YarnServiceJobSubmitter.java:[line 192] is not discharged
      org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceUtils.getComponentArrayJson(String, int, String) concatenates strings using + in a loop At YarnServiceUtils.java:[line 72]

   Failed CTEST tests :

      test_test_libhdfs_threaded_hdfs_static
      test_libhdfs_threaded_hdfspp_test_shim_static

   Failed junit tests :

      hadoop.hdfs.client.impl.TestBlockReaderLocal
      hadoop.hdfs.server.datanode.TestIncrementalBlockReports
      hadoop.hdfs.TestLeaseRecovery2
      hadoop.yarn.server.nodemanager.containermanager.TestContainerManager
      hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart
      hadoop.yarn.server.resourcemanager.TestWorkPreservingRMRestart
      hadoop.yarn.client.api.impl.TestAMRMProxy
      hadoop.yarn.applications.distributedshell.TestDistributedShell
      hadoop.mapred.TestMRTimelineEventHandling
      hadoop.yarn.service.TestServiceAM
      hadoop.yarn.sls.TestSLSRunner

   cc:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/878/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/878/artifact/out/diff-compile-javac-root.txt [328K]

   checkstyle:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/878/artifact/out/diff-checkstyle-root.txt [17M]

   pathlen:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/878/artifact/out/pathlen.txt [12K]

   pylint:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/878/artifact/out/diff-patch-pylint.txt [24K]

   shellcheck:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/878/artifact/out/diff-patch-shellcheck.txt [20K]

   shelldocs:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/878/artifact/out/diff-patch-shelldocs.txt [16K]

   whitespace:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/878/artifact/out/whitespace-eol.txt [9.4M]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/878/artifact/out/whitespace-tabs.txt [1.1M]

   xml:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/878/artifact/out/xml.txt [4.0K]

   findbugs:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/878/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-submarine-warnings.html [12K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/878/artifact/out/branch-findbugs-hadoop-hdds_client.txt [72K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/878/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt [60K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/878/artifact/out/branch-findbugs-hadoop-hdds_framework.txt [8.0K]
[jira] [Resolved] (HDDS-276) Fix symbolic link creation during Ozone dist process
[ https://issues.apache.org/jira/browse/HDDS-276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Elek, Marton resolved HDDS-276.
-------------------------------
    Resolution: Won't Fix

Will be fixed by HDDS-280

> Fix symbolic link creation during Ozone dist process
> ----------------------------------------------------
>
>                 Key: HDDS-276
>                 URL: https://issues.apache.org/jira/browse/HDDS-276
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>    Affects Versions: 0.2.1
>            Reporter: Elek, Marton
>            Assignee: Elek, Marton
>            Priority: Minor
>             Fix For: 0.2.1
>
>         Attachments: HDDS-276.001.patch
>
>
> Ozone creates a symlink during the dist process.
> Using the "ozone" directory as the destination name keeps all the docker-based acceptance tests and docker-compose files simpler, as they don't need the version information in the path.
> But to keep the version-specific folder name in the tar file, we create a symbolic link during the tar creation. With the symbolic link and the '--dereference' tar argument we can create a tar file that includes a versioned directory (ozone-0.2.1) while still using a dist directory without the version in the name (hadoop-dist/target/ozone).
> Currently this symlink creation has an issue: it can't be run twice. You need to do a 'mvn clean' before you can create a new dist.
> Fortunately this can be fixed easily by checking whether the destination symlink exists.
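The "check whether the destination symlink exists" fix described above, sketched in Java for illustration only; the real dist build performs the equivalent check from the maven dist profile, not from Java code:

{code:java}
import java.nio.file.Files;
import java.nio.file.LinkOption;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical sketch: create the versioned symlink only when it does not
// already exist, so the dist can be rebuilt without a 'mvn clean'.
public class EnsureDistSymlink {
  public static void main(String[] args) throws Exception {
    Path link = Paths.get("hadoop-dist/target/ozone");
    Path target = Paths.get("ozone-0.2.1");
    if (Files.notExists(link, LinkOption.NOFOLLOW_LINKS)) {
      Files.createSymbolicLink(link, target);
    }
  }
}
{code}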
[jira] [Created] (HDDS-374) Support to configure container size in units lesser than GB
Nanda kumar created HDDS-374:
--------------------------------

             Summary: Support to configure container size in units lesser than GB
                 Key: HDDS-374
                 URL: https://issues.apache.org/jira/browse/HDDS-374
             Project: Hadoop Distributed Data Store
          Issue Type: Improvement
          Components: Ozone Datanode
            Reporter: Nanda kumar
            Assignee: Nanda kumar
             Fix For: 0.2.1

After HDDS-317 we can configure the container size with its unit (e.g. 5gb, 1000mb). But we still require it to be a multiple of GB; the configured value is rounded down (floor) to the nearest GB. Support for units smaller than GB would be helpful and would make life simpler when writing unit tests.
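To make the rounding concrete, here is a self-contained illustration of parsing a size string into exact bytes rather than whole GBs. The helper is hypothetical; Ozone itself reads such values through the StorageSize API from HDDS-317:

{code:java}
// Hypothetical helper: "512mb" and "5gb" both resolve to exact byte counts,
// with no flooring to a whole number of GBs.
static long parseSizeToBytes(String value) {
  String v = value.trim().toLowerCase();
  final long MB = 1024L * 1024;
  final long GB = 1024L * MB;
  if (v.endsWith("gb")) {
    return (long) (Double.parseDouble(v.substring(0, v.length() - 2)) * GB);
  } else if (v.endsWith("mb")) {
    return (long) (Double.parseDouble(v.substring(0, v.length() - 2)) * MB);
  }
  throw new IllegalArgumentException("Unsupported size unit in: " + value);
}
{code}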
Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64
For more details, see https://builds.apache.org/job/hadoop-trunk-win/568/

[Aug 23, 2018 8:10:44 PM] (aw) YETUS-660. checkstyle should report when it fails to execute (addendum)
[Aug 23, 2018 11:48:02 PM] (aw) YETUS-591. Match git SHA1 with github pull request #
[Aug 24, 2018 12:05:44 AM] (aw) YETUS-640. add hadolint support
[Aug 23, 2018 12:54:38 PM] (tasanuma) HADOOP-14314. The OpenSolaris taxonomy link is dead in
[Aug 23, 2018 2:29:46 PM] (jlowe) YARN-8649. NPE in localizer hearbeat processing if a container is killed
[Aug 23, 2018 6:30:28 PM] (xyao) HDDS-328. Support export and import of the KeyValueContainer.
[Aug 24, 2018 2:44:57 AM] (surendralilhore) HDFS-13805. Journal Nodes should allow to format non-empty directories
[Aug 24, 2018 11:56:30 AM] (elek) HDDS-317. Use new StorageSize API for reading

ERROR: File 'out/email-report.txt' does not exist
[jira] [Resolved] (HDFS-13855) RBF: Router WebUI cannot display capacity and DN exactly when nameservice all in Federation Cluster
[ https://issues.apache.org/jira/browse/HDFS-13855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

yanghuafeng resolved HDFS-13855.
--------------------------------
    Resolution: Duplicate

> RBF: Router WebUI cannot display capacity and DN exactly when nameservice all in Federation Cluster
> ----------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-13855
>                 URL: https://issues.apache.org/jira/browse/HDFS-13855
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: federation, hdfs
>    Affects Versions: 3.0.0, 3.1.0, 2.9.1
>            Reporter: yanghuafeng
>            Assignee: yanghuafeng
>            Priority: Major
>
> The FederationMetrics currently aggregates the capacity and DataNode counts from the different nameservices. But it is not correct to aggregate them when all the nameservices belong to the same Federation cluster; we should only display one nameservice's information.
[jira] [Created] (HDFS-13863) FsDatasetImpl should log DiskOutOfSpaceException
Fei Hui created HDFS-13863:
------------------------------

             Summary: FsDatasetImpl should log DiskOutOfSpaceException
                 Key: HDFS-13863
                 URL: https://issues.apache.org/jira/browse/HDFS-13863
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: hdfs
    Affects Versions: 3.0.3, 2.9.1, 3.1.0
            Reporter: Fei Hui
            Assignee: Fei Hui

The code in the function *createRbw* is as follows:
{code:java}
try {
  // First try to place the block on a transient volume.
  ref = volumes.getNextTransientVolume(b.getNumBytes());
  datanode.getMetrics().incrRamDiskBlocksWrite();
} catch (DiskOutOfSpaceException de) {
  // Ignore the exception since we just fall back to persistent storage.
} finally {
  if (ref == null) {
    cacheManager.release(b.getNumBytes());
  }
}
{code}
I think we should log the exception, because it took me a long time to track down the problem it hides, and others may face the same issue. When I tested RAM disk, no data was written to the RAM disk. Debugging deep into the source code, I found that the RAM disk size was less than the reserved space. Had the message been logged, I would have resolved the problem quickly.
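A minimal sketch of the logging this report asks for, assuming the class's existing LOG field; the message text is illustrative, not the committed patch:

{code:java}
} catch (DiskOutOfSpaceException de) {
  // Proposed: say why the transient volume was skipped instead of silently
  // falling back to persistent storage.
  LOG.warn("Failed to place block " + b + " on a transient volume, "
      + "falling back to persistent storage", de);
}
{code}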
[jira] [Created] (HDFS-13862) RBF: Router logs are not capturing few of the dfsrouteradmin commands
Soumyapn created HDFS-13862:
-------------------------------

             Summary: RBF: Router logs are not capturing few of the dfsrouteradmin commands
                 Key: HDFS-13862
                 URL: https://issues.apache.org/jira/browse/HDFS-13862
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Soumyapn

Test steps: the commands below are not captured in the Router logs.
# The destination entry name in the add command is not logged; the log only says "Added new mount point /apps9 to resolver"
# Safemode enter|leave|get commands
# nameservice enable
[jira] [Created] (HDDS-373) Ozone genconf tool must generate ozone-site.xml with sample values instead of a template
Dinesh Chitlangia created HDDS-373:
--------------------------------------

             Summary: Ozone genconf tool must generate ozone-site.xml with sample values instead of a template
                 Key: HDDS-373
                 URL: https://issues.apache.org/jira/browse/HDDS-373
             Project: Hadoop Distributed Data Store
          Issue Type: Improvement
          Components: Tools
            Reporter: Dinesh Chitlangia
            Assignee: Dinesh Chitlangia

Currently, the genconf tool generates a template ozone-site.xml. This is not very useful for new users, as they would have to work out what values should be set for the minimal configuration properties. This Jira proposes to modify the ozone-default.xml that the genconf tool leverages to generate ozone-site.xml.