[
https://issues.apache.org/jira/browse/HDFS-11807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Anatoli Shein updated HDFS-11807:
---------------------------------
Attachment: HDFS-11807.HDFS-8707.007.patch
Retrying since Yetus failed.
> libhdfs++: Get minidfscluster tests running under valgrind
> ----------------------------------------------------------
>
> Key: HDFS-11807
> URL: https://issues.apache.org/jira/browse/HDFS-11807
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: hdfs-client
> Reporter: James Clampffer
> Assignee: Anatoli Shein
> Attachments: HDFS-11807.HDFS-8707.000.patch,
> HDFS-11807.HDFS-8707.001.patch, HDFS-11807.HDFS-8707.002.patch,
> HDFS-11807.HDFS-8707.003.patch, HDFS-11807.HDFS-8707.004.patch,
> HDFS-11807.HDFS-8707.005.patch, HDFS-11807.HDFS-8707.006.patch,
> HDFS-11807.HDFS-8707.007.patch
>
>
> The gmock-based unit tests generally don't expose race conditions and memory
> stomps. A good way to expose these is to run the libhdfs++ stress tests and
> tools under valgrind and point them at a real cluster. Right now the CI
> tools don't do that, so bugs occasionally slip in and aren't caught until
> they cause trouble in applications that use libhdfs++ for HDFS access.
> The reason the minidfscluster tests don't run under valgrind is that the
> GC and JIT compiler in the embedded JVM do things that look like errors to
> valgrind. I'd like to have these tests do some basic setup and then fork
> into two processes: one for the minidfscluster and one for the libhdfs++
> client test. A small amount of shared memory can give the minidfscluster a
> place to put the hdfsBuilder object the client needs in order to find out
> which port to connect to. A condition variable can also be placed there to
> let the minidfscluster know when it can shut down.
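For reference, a minimal sketch of the fork/shared-memory coordination the description outlines. This is not the attached patch: the SharedState struct, the namenode_port field standing in for the hdfsBuilder info, and the hard-coded port are all illustrative assumptions, and the real patch may differ (e.g., in how the builder is serialized and how the JVM process is kept away from valgrind).
{code:cpp}
// Sketch: fork into a minidfscluster process and a client-test process,
// coordinating through an anonymous shared mapping.
#include <pthread.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>

// Lives in shared memory; visible to both processes after fork().
struct SharedState {
  pthread_mutex_t mtx;
  pthread_cond_t  cv;
  int  namenode_port;  // stand-in for the hdfsBuilder info (illustrative)
  bool client_done;    // client sets this so the cluster knows it can exit
};

int main() {
  // MAP_ANONYMOUS | MAP_SHARED: the same page is seen by parent and child.
  void *mem = mmap(nullptr, sizeof(SharedState), PROT_READ | PROT_WRITE,
                   MAP_SHARED | MAP_ANONYMOUS, -1, 0);
  if (mem == MAP_FAILED) { perror("mmap"); return 1; }
  auto *st = static_cast<SharedState *>(mem);

  // Mutex/condvar must be PROCESS_SHARED to synchronize across fork().
  pthread_mutexattr_t ma;
  pthread_mutexattr_init(&ma);
  pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);
  pthread_mutex_init(&st->mtx, &ma);
  pthread_condattr_t ca;
  pthread_condattr_init(&ca);
  pthread_condattr_setpshared(&ca, PTHREAD_PROCESS_SHARED);
  pthread_cond_init(&st->cv, &ca);
  st->namenode_port = 0;
  st->client_done = false;

  pid_t pid = fork();
  if (pid < 0) { perror("fork"); return 1; }

  if (pid == 0) {
    // Child: the libhdfs++ client test (the side meant to be valgrind-clean).
    pthread_mutex_lock(&st->mtx);
    while (st->namenode_port == 0)            // wait until the cluster is up
      pthread_cond_wait(&st->cv, &st->mtx);
    int port = st->namenode_port;
    pthread_mutex_unlock(&st->mtx);

    printf("client: would connect to port %d\n", port);  // real test here

    pthread_mutex_lock(&st->mtx);
    st->client_done = true;                   // let the cluster shut down
    pthread_cond_signal(&st->cv);
    pthread_mutex_unlock(&st->mtx);
    return 0;
  }

  // Parent: the minidfscluster side (where the JVM would live). Publish the
  // connection info, then wait for the client before tearing down.
  pthread_mutex_lock(&st->mtx);
  st->namenode_port = 8020;                   // would come from the cluster
  pthread_cond_signal(&st->cv);
  while (!st->client_done)
    pthread_cond_wait(&st->cv, &st->mtx);
  pthread_mutex_unlock(&st->mtx);

  waitpid(pid, nullptr, 0);
  munmap(st, sizeof(SharedState));
  return 0;
}
{code}
The PROCESS_SHARED attributes on the mutex and condition variable are what make the handshake work across the process boundary; a default (process-private) condvar placed in shared memory is not guaranteed to do so.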