[ 
https://issues.apache.org/jira/browse/HDFS-16468?focusedWorklogId=762603&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-762603
 ]

ASF GitHub Bot logged work on HDFS-16468:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 26/Apr/22 22:38
            Start Date: 26/Apr/22 22:38
    Worklog Time Spent: 10m 
      Work Description: goiri commented on code in PR #4228:
URL: https://github.com/apache/hadoop/pull/4228#discussion_r859200993


##########
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/examples/cc/cat/cat.cc:
##########
@@ -62,7 +62,6 @@ int main(int argc, char *argv[]) {
   //wrapping file_raw into a unique pointer to guarantee deletion
   std::unique_ptr<hdfs::FileHandle> file(file_raw);
 
-  ssize_t total_bytes_read = 0;

Review Comment:
   I would prefer to do the cleanup of these unused variables (including the one 
in the tools) separately.





Issue Time Tracking
-------------------

    Worklog Id:     (was: 762603)
    Time Spent: 3h 10m  (was: 3h)

> Define ssize_t for Windows
> --------------------------
>
>                 Key: HDFS-16468
>                 URL: https://issues.apache.org/jira/browse/HDFS-16468
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: libhdfs++
>    Affects Versions: 3.4.0
>         Environment: Windows 10
>            Reporter: Gautham Banasandra
>            Assignee: Gautham Banasandra
>            Priority: Major
>              Labels: libhdfscpp, pull-request-available
>          Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Some C/C++ files use the *ssize_t* data type, which isn't available on 
> Windows. We need to define an alias for it and set it to *long long* to make 
> the code cross-platform compatible.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
