[ 
https://issues.apache.org/jira/browse/HDFS-7011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14126375#comment-14126375
 ] 

Colin Patrick McCabe commented on HDFS-7011:
--------------------------------------------

{code}
+namespace Hdfs {
+namespace Internal {
{code}

Don't capitalize the first letter of namespaces.  Also, I'd prefer {{namespace 
ndfs}}, or at least {{namespace hdfs3}}, to distinguish this from the 
already-existing libhdfs.

C++ files should be .cc, not .cpp, to be consistent with the rest of the Hadoop 
C++ code.

We need the apache license header at the top of all files.

{code}
+const char * AlreadyBeingCreatedException::ReflexName =
+    "org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException";
{code}

These should be {{const char * const}} (a const pointer to memory that is 
itself const) so that the compiler can put them into a read-only segment of 
the library.

{{hadoop-hdfs-project/hadoop-hdfs/src/contrib/libhdfs3/src/common/Memory.h}}:  
boost isn't needed for {{shared_ptr}}... we should use 
{{std::tr1::shared_ptr}}, which is available on all compilers released in the 
last 15 or so years (including the one on CentOS 5).

{code}
+#ifndef DEFAULT_STACK_PREFIX
+#define DEFAULT_STACK_PREFIX "\t@\t"
+#endif
{code}
We don't need this {{#ifndef}}, because the definition is already protected by 
the header file's include guard.

{code}
+sigset_t ThreadBlockSignal() {
+    sigset_t sigs;
+    sigset_t oldMask;
+    sigemptyset(&sigs);
+    sigaddset(&sigs, SIGHUP);
+    sigaddset(&sigs, SIGINT);
+    sigaddset(&sigs, SIGTERM);
+    sigaddset(&sigs, SIGUSR1);
+    sigaddset(&sigs, SIGUSR2);
+    pthread_sigmask(SIG_BLOCK, &sigs, &oldMask);
+    return oldMask;
+}
{code}

I don't see why we should block these signals.  Also, you should block 
SIGPIPE.  SIGPIPE is a little special in that it will not be raised at all if 
the thread that has the broken pipe has the signal blocked.

> Implement basic utilities for libhdfs3
> --------------------------------------
>
>                 Key: HDFS-7011
>                 URL: https://issues.apache.org/jira/browse/HDFS-7011
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: hdfs-client
>            Reporter: Zhanwei Wang
>         Attachments: HDFS-7011.patch
>
>
> Implement basic utilities such as hashing, exception handling, a logger, a 
> configuration parser, checksum calculation, and so on.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
