[
https://issues.apache.org/jira/browse/HDFS-6984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15755358#comment-15755358
]
Chris Douglas commented on HDFS-6984:
-------------------------------------
bq. So for the Hive usecase, they would have to pass around a full serialized
FileStatus even though open() only needs the PathHandle field? This API seems
fine for same-process usage (HDFS-9806?) but inefficient for cross-process. I
think UNIX users are also used to the idea of an inode id separate from a file
status.
Sure, but this wasn't what I was trying to get at. On the efficiency question:
# The overhead that matters is at the NN, since it's often the cluster
bottleneck. If the NN is returning information that the client immediately
discards, the client-side inefficiency of not immediately discarding it is not
irrelevant, but it's not significant. For a few hundred {{FileStatus}} records
this overhead is less than 1MB, and most of it is quickly eligible for GC
(most {{FileStatus}} objects are short-lived).
# For cases where we want to manage thousands or millions of {{FileStatus}}
instances, this overhead may become significant. But we can compress repeated
data, and most collections of {{FileStatus}} objects are mostly repeated data.
Further, most of these fields are optional, and if an application wants to
transfer only the fields it needs (e.g., the Path and PathHandle, omitting
everything else), that's fine; there's a sketch of that after this list.
# I share your caution w.r.t. the API surface, which is why HDFS-7878 avoids
adding lots of new calls accepting {{PathHandle}} objects. Users of
{{FileSystem}} almost always intend to refer to the entity returned by the last
query, not whatever happens to exist at that path now.
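On the cross-process point from the quoted comment, here is a minimal sketch
of what "only necessary fields" could look like. It assumes the handle API
shape under review in HDFS-7878 ({{getPathHandle}} returning an opaque
{{PathHandle}} with a {{bytes()}} accessor, and {{open(PathHandle)}}); those
names are assumptions about an API that isn't committed yet, not a reference
to anything that exists today.
{code:java}
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.PathHandle;

public class MinimalTransfer {
  /**
   * Serialize only (path, handle) pairs for hand-off to another process,
   * dropping every other FileStatus field. The handle bytes are opaque;
   * PathHandle#bytes() is assumed from the HDFS-7878 proposal.
   */
  static byte[] packPathsAndHandles(FileSystem fs, FileStatus[] listing)
      throws IOException {
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    try (DataOutputStream out = new DataOutputStream(buf)) {
      out.writeInt(listing.length);
      for (FileStatus st : listing) {
        out.writeUTF(st.getPath().toString());
        PathHandle handle = fs.getPathHandle(st);   // assumed HDFS-7878 API
        ByteBuffer raw = handle.bytes();            // assumed accessor
        byte[] bytes = new byte[raw.remaining()];
        raw.get(bytes);
        out.writeInt(bytes.length);
        out.write(bytes);
      }
    }
    return buf.toByteArray();
  }
}
{code}
On the receiving side, the handle bytes would be wrapped back into a
{{PathHandle}} and handed to {{open}}; everything else in the {{FileStatus}}
stays behind.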
In exchange for a non-optimal record size, the API gains some convenient
symmetry (i.e., get a FileStatus, use a FileStatus) that at least makes it
possible to avoid TOCTOU races that are uncommon but annoying (sketched
below). The serialization can omit fields. We can use generic container
formats that understand PB to compress collections of {{FileStatus}} objects.
We can even use a more generic type ({{FileStatus}}) if the specific type
follows some PB conventions. Considering what these actually cost, the
distance from "optimal" seems well within the ambient noise.
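To make that symmetry concrete, a short sketch; same caveat as above, it
assumes the HDFS-7878 shape of {{getPathHandle}}/{{open(PathHandle)}}, which
isn't final:
{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathHandle;

public class OpenByHandle {
  static FSDataInputStream openListedFile(FileSystem fs, Path p)
      throws IOException {
    // "Get a FileStatus..."
    FileStatus st = fs.getFileStatus(p);

    // ...another client may rename or replace p right here...

    // "...use a FileStatus": the handle refers to the entity returned by the
    // query above. Opening by handle either reaches that same entity or
    // fails, instead of silently reading whatever now sits at p.
    PathHandle handle = fs.getPathHandle(st);   // assumed HDFS-7878 API
    return fs.open(handle);                     // assumed HDFS-7878 API
  }
}
{code}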
bq. I still lean toward removing Writable altogether, since it reduces our API
surface. Similarly, I'd rather not open up HdfsFileStatus as a public API
(without a concrete usecase) since it expands our API surface.
Again, I'm mostly ambivalent about {{Writable}}. [[email protected]]? As for
preserving the automatic serialization that some user programs may rely on: we
can try removing it in 3.x-alpha/beta and see if anyone complains. If I moved
the {{Writable}} support into a library, would that move this forward?
I didn't mean to suggest that {{HdfsFileStatus}} should be a public API (with
all the restrictions on evolving it).
bq. The cross-serialization is also fragile since we need to be careful not to
reuse field numbers across two structures, and I've seen numbering mistakes
made before even for normal PB changes.
Granted, but a hole in the PB toolchain doesn't mean we shouldn't use the
feature. We can add more comments to the .proto. Reading through HDFS-6326,
perhaps making {{FsPermission}} a protobuf record would help.
> In Hadoop 3, make FileStatus serialize itself via protobuf
> ----------------------------------------------------------
>
> Key: HDFS-6984
> URL: https://issues.apache.org/jira/browse/HDFS-6984
> Project: Hadoop HDFS
> Issue Type: Improvement
> Affects Versions: 3.0.0-alpha1
> Reporter: Colin P. McCabe
> Assignee: Colin P. McCabe
> Labels: BB2015-05-TBR
> Attachments: HDFS-6984.001.patch, HDFS-6984.002.patch,
> HDFS-6984.003.patch, HDFS-6984.nowritable.patch
>
>
> FileStatus was a Writable in Hadoop 2 and earlier. Originally, we used this
> to serialize it and send it over the wire. But in Hadoop 2 and later, we
> have the protobuf {{HdfsFileStatusProto}} which serves to serialize this
> information. The protobuf form is preferable, since it allows us to add new
> fields in a backwards-compatible way. Another issue is that a lot of
> FileStatus subclasses already fail to override the Writable methods of the
> superclass, breaking the interface contract that reading back status.write()
> should reproduce the original status.
> In Hadoop 3, we should just make FileStatus serialize itself via protobuf so
> that we don't have to deal with these issues. It's probably too late to do
> this in Hadoop 2, since user code may be relying on the existing FileStatus
> serialization there.
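To make the contract break described above concrete, a minimal sketch against
the Hadoop 2.x Writable-based FileStatus, using a hypothetical subclass
({{TaggedFileStatus}}) that adds a field but, like several real subclasses,
does not override {{write}}/{{readFields}}:
{code:java}
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;

/** Hypothetical subclass: adds state but inherits write()/readFields(). */
class TaggedFileStatus extends FileStatus {
  private String tag;

  public TaggedFileStatus() { }                  // needed by readFields()

  public TaggedFileStatus(long length, boolean isdir, int replication,
      long blocksize, long mtime, Path path, String tag) {
    super(length, isdir, replication, blocksize, mtime, path);
    this.tag = tag;
  }

  public String getTag() { return tag; }
}

public class WritableContractDemo {
  public static void main(String[] args) throws IOException {
    TaggedFileStatus original = new TaggedFileStatus(1024L, false, 3,
        128L << 20, 0L, new Path("/user/alice/data.txt"), "hot");

    // Round-trip through the inherited Writable methods.
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    original.write(new DataOutputStream(bytes));
    TaggedFileStatus copy = new TaggedFileStatus();
    copy.readFields(new DataInputStream(
        new ByteArrayInputStream(bytes.toByteArray())));

    // The inherited methods know nothing about 'tag', so the round-tripped
    // status is not equal to the original: the tag is silently dropped.
    System.out.println(original.getTag());   // hot
    System.out.println(copy.getTag());       // null
  }
}
{code}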