[ 
https://issues.apache.org/jira/browse/HDFS-11807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16217316#comment-16217316
 ] 

James Clampffer commented on HDFS-11807:
----------------------------------------

Most recent change fixes the hang, thanks.

Some feedback:

-stdlib.h is now included twice:
{code}
 #include <stdlib.h>
 #include <string.h>
+#include <stdlib.h>
{code}

-The EXPECT_EQ checks on return values are good; however, in some cases if the 
EXPECT fails there's no chance the rest of the test will run correctly.  For 
example, if one of the write() calls fails it'd be better to just exit the test 
than let it produce what might be a confusing error.  An easy fix would be to 
swap EXPECT_EQ with ASSERT_EQ in these cases.
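
A minimal illustration of the difference (hypothetical test, not from the 
patch): ASSERT_* aborts the current test on failure so later statements never 
run against bad state, while EXPECT_* records the failure and keeps going.
{code}
#include <gtest/gtest.h>
#include <cstdio>

TEST(WriteTest, AssertStopsOnFailedWrite) {
  FILE *f = tmpfile();
  // If we couldn't even open a file, bail out of this test immediately;
  // EXPECT_NE here would fall through and crash on the null FILE*.
  ASSERT_NE(nullptr, f);
  // Likewise, stop here if the write fails instead of producing
  // confusing downstream errors.
  ASSERT_EQ(4u, fwrite("data", 1, 4, f));
  fclose(f);
}
{code}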

-I'd avoid using the hardcoded path "tmp" when you write the file you're going 
to send with curl.  Instead, check out what TempFile in configuration_test.h 
does to get a file name that's guaranteed to be unused.
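
For reference, a sketch of that mkstemp-based approach (names and the path 
prefix here are illustrative, not copied from configuration_test.h):
{code}
#include <cstdlib>
#include <string>
#include <unistd.h>

// RAII temp file: mkstemp() atomically creates a uniquely named file,
// so the name is guaranteed not to collide with an existing one.
class TempFile {
 public:
  TempFile() : path_("/tmp/test_XXXXXX"), fd_(mkstemp(&path_[0])) {}
  ~TempFile() {
    if (fd_ >= 0) close(fd_);
    unlink(path_.c_str());  // clean up the file on destruction
  }
  const std::string &path() const { return path_; }
  int fd() const { return fd_; }

 private:
  std::string path_;  // mkstemp rewrites the trailing X's in place
  int fd_;
};
{code}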

-libhdfs also uses this test, so we don't want to hardcode the libhdfspp_ 
prefix on API functions; that would mean this could only ever test libhdfs++.  
You can most likely apply the same trick that the libhdfspp_wrapper shims use 
to add the appropriate prefixes during the build.  Alternatively, I think you 
could build the binary used with valgrind with -DLIBHDFS_HDFS_H set so the 
shims aren't applied at all, avoiding any changes to the calls.

-Might be worth checking the return value on snprintf to see if the string gets 
truncated.  The paths returned by the temp file mechanism might be long enough 
to spill over 200 chars.
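
A sketch of the truncation check (the helper name and sizes are illustrative): 
snprintf returns the length the formatted string *would* have had, so a return 
value >= the buffer size means the output was clipped.
{code}
#include <cstddef>
#include <cstdio>

// Returns true only if the full "dir/name" path fit in buf.
static bool format_path(char *buf, size_t bufsize,
                        const char *dir, const char *name) {
  int n = snprintf(buf, bufsize, "%s/%s", dir, name);
  // n < 0 is an encoding error; n >= bufsize means truncation.
  return n >= 0 && static_cast<size_t>(n) < bufsize;
}
{code}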

> libhdfs++: Get minidfscluster tests running under valgrind
> ----------------------------------------------------------
>
>                 Key: HDFS-11807
>                 URL: https://issues.apache.org/jira/browse/HDFS-11807
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: hdfs-client
>            Reporter: James Clampffer
>            Assignee: Anatoli Shein
>         Attachments: HDFS-11807.HDFS-8707.000.patch, 
> HDFS-11807.HDFS-8707.001.patch, HDFS-11807.HDFS-8707.002.patch
>
>
> The gmock based unit tests generally don't expose race conditions and memory 
> stomps.  A good way to expose these is running libhdfs++ stress tests and 
> tools under valgrind and pointing them at a real cluster.  Right now the CI 
> tools don't do that so bugs occasionally slip in and aren't caught until they 
> cause trouble in applications that use libhdfs++ for HDFS access.
> The reason the minidfscluster tests don't run under valgrind is because the 
> GC and JIT compiler in the embedded JVM do things that look like errors to 
> valgrind.  I'd like to have these tests do some basic setup and then fork 
> into two processes: one for the minidfscluster stuff and one for the 
> libhdfs++ client test.  A small amount of shared memory can be used to 
> provide a place for the minidfscluster to stick the hdfsBuilder object that 
> the client needs to get info about which port to connect to.  Can also stick 
> a condition variable there to let the minidfscluster know when it can shut 
> down.
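
The forked handshake proposed above could be sketched with a process-shared 
mutex/condvar in an anonymous shared mapping (all names are illustrative; the 
real test would also store the hdfsBuilder/port info in the shared region):
{code}
#include <pthread.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

struct SharedState {
  pthread_mutex_t mtx;
  pthread_cond_t cv;
  int client_done;
  int port;  // placeholder for the "which port to connect to" info
};

static bool run_handshake() {
  // Anonymous shared mapping: visible to both processes after fork().
  void *mem = mmap(nullptr, sizeof(SharedState), PROT_READ | PROT_WRITE,
                   MAP_SHARED | MAP_ANONYMOUS, -1, 0);
  if (mem == MAP_FAILED) return false;
  SharedState *st = static_cast<SharedState *>(mem);

  // The mutex and condvar must be marked process-shared to work
  // across the fork() boundary.
  pthread_mutexattr_t ma;
  pthread_mutexattr_init(&ma);
  pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);
  pthread_mutex_init(&st->mtx, &ma);
  pthread_condattr_t ca;
  pthread_condattr_init(&ca);
  pthread_condattr_setpshared(&ca, PTHREAD_PROCESS_SHARED);
  pthread_cond_init(&st->cv, &ca);
  st->client_done = 0;

  pid_t pid = fork();
  if (pid == 0) {
    // Child: stands in for the libhdfs++ client test run under valgrind.
    pthread_mutex_lock(&st->mtx);
    st->client_done = 1;  // let the minidfscluster know it can shut down
    pthread_cond_signal(&st->cv);
    pthread_mutex_unlock(&st->mtx);
    _exit(0);
  }
  // Parent: stands in for the minidfscluster; waits before shutting down.
  pthread_mutex_lock(&st->mtx);
  while (!st->client_done)
    pthread_cond_wait(&st->cv, &st->mtx);
  pthread_mutex_unlock(&st->mtx);
  waitpid(pid, nullptr, 0);
  munmap(mem, sizeof(SharedState));
  return true;
}
{code}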



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
