Chris Douglas commented on HDFS-7878:

bq. Any changes to FileStatus have to be done so that external filesystems (e.g. 
Google Cloud Storage) which subclass FileStatus don't break. I know, given the 
pain Guice causes us, it'd be retaliation, but the GCS team aren't the Guice 
team, and breaking them would upset users.

Of course. On HDFS-6984, I'm anxious about changing the {{FileStatus}} 
serialization between 2.x and 3.x. Distcp is fine, since that dependency is only 
intra-job, but if someone has a {{SequenceFile}} full of {{FileStatus}} objects 
in their clusters, those records would become unreadable without a 2.x common 
jar. Whether it's worth it or not, we can discuss in HDFS-6984.
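To illustrate the compatibility risk, here is a minimal, self-contained sketch. It uses plain {{java.io}} rather than the actual Hadoop {{Writable}} code, and the field layout is hypothetical; the point is only that a reader built against the old record layout cannot cleanly consume records written with an extra field appended.

```java
import java.io.*;

public class StatusCompat {
    // Hypothetical "2.x" layout: path + length.
    static byte[] writeV1(String path, long len) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeUTF(path);
        out.writeLong(len);
        return bos.toByteArray();
    }

    // Hypothetical "3.x" layout: same fields plus a trailing fileId.
    static byte[] writeV2(String path, long len, long fileId) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeUTF(path);
        out.writeLong(len);
        out.writeLong(fileId);  // new field changes the on-disk layout
        return bos.toByteArray();
    }

    // A v1-style reader consuming one record; leftover bytes mean the next
    // record in a SequenceFile-style stream would start mid-field.
    static int leftoverAfterV1Read(byte[] record) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(record));
        in.readUTF();
        in.readLong();
        return in.available();  // 0 for v1 data, 8 for v2 data
    }

    public static void main(String[] args) throws IOException {
        System.out.println(leftoverAfterV1Read(writeV1("/a", 1L)));       // 0
        System.out.println(leftoverAfterV1Read(writeV2("/a", 1L, 42L)));  // 8
    }
}
```

The 8 leftover bytes are the appended {{fileId}}; in a stream of records they would be misread as the start of the next record, which is why old data stays readable only with the old jar on the classpath.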

bq. FileStatus is part of the public FS API, documented in FileSystem.md. 
You're proposing changing it, aren't you? Which means you get to update the doc 
and the tests in {{AbstractContractGetFileStatusTest}}.

Happily. Do you have feedback on this proposal, outside of its requirement to 
update the FS specification?

> API - expose an unique file identifier
> --------------------------------------
>                 Key: HDFS-7878
>                 URL: https://issues.apache.org/jira/browse/HDFS-7878
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Sergey Shelukhin
>            Assignee: Sergey Shelukhin
>              Labels: BB2015-05-TBR
>         Attachments: HDFS-7878.01.patch, HDFS-7878.02.patch, 
> HDFS-7878.03.patch, HDFS-7878.04.patch, HDFS-7878.05.patch, 
> HDFS-7878.06.patch, HDFS-7878.patch
> See HDFS-487.
> Even though that is resolved as duplicate, the ID is actually not exposed by 
> the JIRA it supposedly duplicates.
> INode ID for the file should be easy to expose; alternatively ID could be 
> derived from block IDs, to account for appends...
> This is useful, e.g., as a per-file cache key, to make sure the cache stays 
> correct when a file is overwritten.
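The cache use case in the description can be sketched with a toy, Hadoop-free example. The {{fileId}} here stands in for the proposed inode-derived identifier, and all names are hypothetical: keying the cache on (path, fileId) rather than path alone means an overwritten file, which gets a fresh id, no longer hits the stale entry.

```java
import java.util.*;

public class FileCache {
    // Toy stand-in for a FileStatus carrying the proposed unique id.
    record Status(String path, long fileId) {}

    private final Map<String, Map.Entry<Long, String>> cache = new HashMap<>();

    // Return cached data only if the id still matches the current file.
    String get(Status current) {
        Map.Entry<Long, String> e = cache.get(current.path());
        return (e != null && e.getKey() == current.fileId()) ? e.getValue() : null;
    }

    void put(Status s, String data) {
        cache.put(s.path(), Map.entry(s.fileId(), data));
    }

    public static void main(String[] args) {
        FileCache c = new FileCache();
        Status v1 = new Status("/tmp/f", 1001L);
        c.put(v1, "contents-v1");
        System.out.println(c.get(v1));            // contents-v1
        Status v2 = new Status("/tmp/f", 1002L);  // file overwritten: new id
        System.out.println(c.get(v2));            // null, stale entry not served
    }
}
```

With path-only keys the second lookup would wrongly return "contents-v1"; the id makes the staleness check trivial.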

This message was sent by Atlassian JIRA
