[ https://issues.apache.org/jira/browse/ARROW-5995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918810#comment-16918810 ]

Max Risuhin commented on ARROW-5995:
------------------------------------

Well, storing a small file at the HDFS path /test/test.txt results in the
following internal file structure on the DataNode:

```
(base) max@ubuntu:/usr/local/hadoop/yarn_data/hdfs/datanode/current/BP-701933085-127.0.1.1-1566905807010/current/finalized/subdir0/subdir0$ ls -l
total 8
-rw-rw-r-- 1 max max 15 Aug 29 10:35 blk_1073741843
-rw-rw-r-- 1 max max 11 Aug 29 10:35 blk_1073741843_1021.meta
```


Here the "blk_1073741843" file contains the content of "/test/test.txt", and the
".meta" file contains binary (not human-readable) checksum metadata for that block. :)
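For what it's worth, the ".meta" file appears to follow the layout of HDFS's BlockMetadataHeader/DataChecksum (a 7-byte header followed by one 4-byte checksum per chunk) — which would explain the 11-byte size above for a 15-byte block (7 + one CRC). A hedged sketch of parsing it; the exact field layout is an assumption, not verified against this cluster:

```python
import struct

def parse_block_meta(path):
    # Sketch: parse an HDFS block .meta file. Assumed layout
    # (per BlockMetadataHeader / DataChecksum, big-endian):
    #   2 bytes  header version
    #   1 byte   checksum type (0 = NULL, 1 = CRC32, 2 = CRC32C)
    #   4 bytes  bytesPerChecksum (chunk size covered by each CRC)
    # followed by one 4-byte checksum per chunk of the block file.
    with open(path, "rb") as f:
        data = f.read()
    version, ctype, bytes_per_checksum = struct.unpack(">HBI", data[:7])
    checksums = [data[7 + i * 4 : 11 + i * 4]
                 for i in range((len(data) - 7) // 4)]
    return version, ctype, bytes_per_checksum, checksums
```

Running this against blk_1073741843_1021.meta should yield exactly one checksum entry, since the block is smaller than one chunk.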

> [Python] pyarrow: hdfs: support file checksum
> ---------------------------------------------
>
>                 Key: ARROW-5995
>                 URL: https://issues.apache.org/jira/browse/ARROW-5995
>             Project: Apache Arrow
>          Issue Type: Improvement
>          Components: Python
>            Reporter: Ruslan Kuprieiev
>            Priority: Minor
>
> I was not able to find how to retrieve the checksum (`getFileChecksum` or `hadoop 
> fs/dfs -checksum`) for a file on HDFS. Judging by how it is implemented in the 
> hadoop CLI [1], it looks like we would also need to implement it manually in 
> pyarrow. Please correct me if I'm missing something. Is this feature 
> desirable? Or was there a good reason why it wasn't implemented already?
>  [1] 
> [https://github.com/hanborq/hadoop/blob/hadoop-hdh3u2.1/src/hdfs/org/apache/hadoop/hdfs/DFSClient.java#L719]
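For context on the DFSClient code linked above: the default `getFileChecksum` result (MD5-of-MD5-of-CRC32) is, as I understand it, composed in two stages — per block, an MD5 over that block's concatenated chunk CRCs, then an MD5 over the concatenated per-block digests. A sketch of that composition under those assumptions (names here are illustrative, not pyarrow API):

```python
import hashlib

def md5_md5_crc(per_block_crcs):
    # Hypothetical sketch of HDFS's MD5-of-MD5-of-CRC composition:
    #   per_block_crcs: list of blocks, each a list of 4-byte chunk CRCs
    # 1) per block: MD5 over that block's concatenated chunk CRCs
    block_md5s = [hashlib.md5(b"".join(crcs)).digest()
                  for crcs in per_block_crcs]
    # 2) file level: MD5 over the concatenated per-block MD5 digests
    return hashlib.md5(b"".join(block_md5s)).hexdigest()
```

This is what would have to be reimplemented (or fetched from the DataNodes over the data transfer protocol) if pyarrow exposed file checksums.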



--
This message was sent by Atlassian Jira
(v8.3.2#803003)
