[
https://issues.apache.org/jira/browse/HDFS-8790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Xiaowei Zhu reassigned HDFS-8790:
---------------------------------
Assignee: Xiaowei Zhu (was: James Clampffer)
> Add Filesystem level stress tests
> ---------------------------------
>
> Key: HDFS-8790
> URL: https://issues.apache.org/jira/browse/HDFS-8790
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: hdfs-client
> Reporter: James Clampffer
> Assignee: Xiaowei Zhu
> Attachments: HDFS-8790.HDFS-8707.000.patch
>
>
> I propose adding stress tests on the libhdfs(3) compatibility layer as well as
> the async calls. These can also be used for basic performance metrics and as
> inputs to profiling tools to see improvements over time.
> I'd like to make these tests into a separate executable, or set of them, so
> that they can be used for longer-running tests on dedicated clusters that may
> already exist. Each should provide a simple command line interface for
> scripted or manual use.
> Basic tests would be:
> looped open-read-close (sketched after this list)
> sequential scans
> small random reads
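> As a concrete illustration, here is a minimal sketch of the looped
> open-read-close case against the libhdfs-compatible C API. The file path,
> buffer size, iteration count, and header location are placeholders chosen for
> illustration, not values taken from the attached patch:
> {code}
> // Minimal open-read-close loop sketch. Repeatedly opening and closing the
> // same file makes handle leaks and use-after-close bugs show up quickly,
> // especially when the loop is run under valgrind.
> #include <hdfs.h>    // libhdfs-compatible C API header; install path may vary
> #include <fcntl.h>   // O_RDONLY
> #include <stdio.h>
>
> int main(void) {
>   hdfsFS fs = hdfsConnect("default", 0);   // "default" = fs.defaultFS from config
>   if (!fs) { fprintf(stderr, "connect failed\n"); return 1; }
>
>   char buf[65536];                          // placeholder read size
>   for (int i = 0; i < 10000; ++i) {         // placeholder iteration count
>     hdfsFile f = hdfsOpenFile(fs, "/stress/testfile", O_RDONLY, 0, 0, 0);
>     if (!f) { fprintf(stderr, "open failed on iteration %d\n", i); break; }
>     hdfsRead(fs, f, buf, (tSize)sizeof(buf));  // one read per open
>     hdfsCloseFile(fs, f);                      // close immediately to exercise teardown
>   }
>   hdfsDisconnect(fs);
>   return 0;
> }
> {code}
> Anything leaked per open/close cycle accumulates over the loop, so even small
> per-iteration leaks become obvious in the valgrind summary.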
> All tests will be parameterized for number of threads, read size, and upper
> and lower offset bounds for a specified file. This will make it much easier
> to detect and reproduce threading issues and resource leaks, as well as
> provide a simple executable (or set of executables) that can be run under
> valgrind to gain high confidence that the code is operating correctly.
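> A hedged sketch of how that parameterization could look for the
> small-random-read case follows; the thread count, read size, and offset
> bounds are the knobs described above, while all names, defaults, and the CLI
> shape are illustrative assumptions:
> {code}
> // Parameterized random-read stress sketch: N threads each open their own
> // handle and issue positional reads at random offsets inside [lower, upper).
> // Illustrative usage: ./stress_random_read <path> <threads> <read_size> <lower> <upper>
> #include <hdfs.h>
> #include <fcntl.h>
> #include <cstdio>
> #include <cstdlib>
> #include <functional>
> #include <random>
> #include <thread>
> #include <vector>
>
> struct Params {
>   const char* path;
>   int threads;
>   tSize read_size;
>   tOffset lower, upper;
> };
>
> static void RandomReadWorker(hdfsFS fs, const Params& p) {
>   hdfsFile f = hdfsOpenFile(fs, p.path, O_RDONLY, 0, 0, 0);  // one handle per thread
>   if (!f) return;
>   std::mt19937_64 rng(std::random_device{}());
>   std::uniform_int_distribution<tOffset> dist(p.lower, p.upper - p.read_size);
>   std::vector<char> buf(p.read_size);
>   for (int i = 0; i < 1000; ++i) {                           // placeholder iteration count
>     hdfsPread(fs, f, dist(rng), buf.data(), p.read_size);    // positional read
>   }
>   hdfsCloseFile(fs, f);
> }
>
> int main(int argc, char** argv) {
>   if (argc != 6) {
>     fprintf(stderr, "usage: %s path threads read_size lower upper\n", argv[0]);
>     return 1;
>   }
>   Params p{argv[1], atoi(argv[2]), (tSize)atol(argv[3]),
>            (tOffset)atoll(argv[4]), (tOffset)atoll(argv[5])};
>   hdfsFS fs = hdfsConnect("default", 0);
>   if (!fs) { fprintf(stderr, "connect failed\n"); return 1; }
>   std::vector<std::thread> workers;
>   for (int i = 0; i < p.threads; ++i)
>     workers.emplace_back(RandomReadWorker, fs, std::cref(p));
>   for (auto& t : workers) t.join();
>   hdfsDisconnect(fs);
>   return 0;
> }
> {code}
> Giving each thread its own file handle keeps the handles independent, so any
> cross-thread interference that surfaces points at shared state inside the
> library rather than at the test harness itself.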
> I'd appreciate suggestions for any other simple stress tests.
> HDFS-8766 intentionally avoided shared_ptr and unique_ptr in the C API to
> make debugging a little easier in case memory stomps and dangling references
> show up in the stress tests. They will be added to the C API when the patch
> for this jira is submitted, because things should be reasonably stable once
> the stress tests pass.