[ 
https://issues.apache.org/jira/browse/HDFS-3246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786806#comment-16786806
 ] 

Steve Loughran commented on HDFS-3246:
--------------------------------------

bq. IMO, we should support ByteBuffer read & pread in all FileSystem ?

Better: HBase handles not having this.

We looked at adding the API to a different FS in HADOOP-14603 (there, the S3A 
connector). Unless the underlying libraries the store uses support this, it's a 
lot of work in the connectors - and for what? It isn't going to deliver the 
speedups you expect.

Better: if a stream offers ByteBufferReadable, it is declaring that it offers 
an efficient way to copy data. If a stream does not, then it is declaring that 
it doesn't. 
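
As a rough caller-side sketch of what that fallback looks like (the class and 
method names here are made up for illustration, not from any patch):

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;

import org.apache.hadoop.fs.ByteBufferReadable;
import org.apache.hadoop.fs.FSDataInputStream;

public final class ByteBufferReadFallback {

  /** Use the ByteBuffer path when the wrapped stream offers it, else copy. */
  static int readInto(FSDataInputStream in, ByteBuffer buf) throws IOException {
    if (in.getWrappedStream() instanceof ByteBufferReadable) {
      // Efficient path: the stream fills the buffer directly.
      return in.read(buf);
    }
    // Fallback path: plain byte[] read, then copy into the buffer.
    byte[] tmp = new byte[buf.remaining()];
    int n = in.read(tmp, 0, tmp.length);
    if (n > 0) {
      buf.put(tmp, 0, n);
    }
    return n;
  }
}
{code}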

We can add a StreamCapabilities probe so you can check whether a stream 
supports it without having to make an API call which then fails - that lets you 
check earlier. Would that help?
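
Roughly like this - the capability string below is illustrative only; the real 
name would be whatever the probe gets added as:

{code:java}
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.StreamCapabilities;

class ByteBufferCapabilityCheck {

  static boolean supportsByteBufferRead(FSDataInputStream in) {
    // Probe once up front instead of calling read(ByteBuffer) and
    // handling UnsupportedOperationException afterwards.
    return in instanceof StreamCapabilities
        && ((StreamCapabilities) in).hasCapability("in:readbytebuffer");
  }
}
{code}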

> pRead equivalent for direct read path
> -------------------------------------
>
>                 Key: HDFS-3246
>                 URL: https://issues.apache.org/jira/browse/HDFS-3246
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs-client, performance
>    Affects Versions: 3.0.0-alpha1
>            Reporter: Henry Robinson
>            Assignee: Sahil Takiar
>            Priority: Major
>         Attachments: HDFS-3246.001.patch, HDFS-3246.002.patch, 
> HDFS-3246.003.patch, HDFS-3246.004.patch, HDFS-3246.005.patch, 
> HDFS-3246.006.patch
>
>
> There is no pread equivalent in ByteBufferReadable. We should consider adding 
> one. It would be relatively easy to implement for the distributed case 
> (certainly compared to HDFS-2834), since DFSInputStream does most of the 
> heavy lifting.
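
For illustration only, the pread equivalent the description asks for could take 
a shape like the following; the actual interface and signatures are whatever 
the attached patches define:

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;

/**
 * Sketch of a positioned ByteBuffer read, mirroring PositionedReadable
 * but filling a ByteBuffer instead of a byte[].
 */
public interface ByteBufferPositionedReadableSketch {

  /**
   * Read up to buf.remaining() bytes starting at the given file position,
   * without changing the stream's current offset (pread semantics).
   *
   * @return number of bytes read, or -1 at end of file
   */
  int read(long position, ByteBuffer buf) throws IOException;
}
{code}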



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
