[
https://issues.apache.org/jira/browse/HDFS-4403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554450#comment-13554450
]
Suresh Srinivas commented on HDFS-4403:
---------------------------------------
bq. So, in your example, with the old clients talking to a new server which
doesn't set crc type, the old clients would continue to use whatever default
they'd defined locally.
Makes sense. So when the field is not set, the old client uses CHECKSUM_CRC32.
Is it possible that the server does not set the crcType and expects the client
to infer the type, the checksum actually is something other than
CHECKSUM_CRC32, and the old client treats it as CHECKSUM_CRC32 and runs into issues?
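The hazard here comes from proto2 default semantics: with a declared default of CHECKSUM_CRC32, the reader cannot distinguish "server set CRC32" from "server never set the field". A minimal sketch of that ambiguity (illustrative class names, not the generated protobuf code):

```java
// Illustrative model of a proto2 optional field with [default = CHECKSUM_CRC32]:
// the getter returns the default even when the peer never set the field.
enum ChecksumType { CRC32, CRC32C }

class ChecksumResponseWithDefault {
    private final ChecksumType crcType; // null models "field not set on the wire"

    ChecksumResponseWithDefault(ChecksumType t) { crcType = t; }

    // Like a proto2 getter: falls back to the declared default when unset,
    // so an old client sees CRC32 regardless of what the server meant.
    ChecksumType getCrcType() {
        return crcType != null ? crcType : ChecksumType.CRC32;
    }
}
```

An old client calling only the getter would therefore proceed with CRC32 even when the block was actually written with CRC32C.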
> DFSClient can infer checksum type when not provided by reading first byte
> -------------------------------------------------------------------------
>
> Key: HDFS-4403
> URL: https://issues.apache.org/jira/browse/HDFS-4403
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs-client
> Affects Versions: 2.0.2-alpha
> Reporter: Todd Lipcon
> Assignee: Todd Lipcon
> Priority: Minor
> Attachments: hdfs-4403.txt
>
>
> HDFS-3177 added the checksum type to OpBlockChecksumResponseProto, but the
> new protobuf field is optional, with a default of CRC32. This means that this
> API, when used against an older cluster (like earlier 0.23 releases) will
> falsely return CRC32 even if that cluster has written files with CRC32C. This
> can cause issues for distcp, for example.
> Instead of defaulting the protobuf field to CRC32, we can leave it with no
> default; if the OpBlockChecksumResponseProto has no checksum type set, the
> client can send OP_READ_BLOCK to read the first byte of the block and grab
> the checksum type from that response (which has always been present).
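The proposed fallback can be sketched as follows; the names are illustrative stand-ins, not the actual DFSClient API. With no protobuf default, the client can first ask whether the peer actually set the field, and only trust it when it did:

```java
// Hedged sketch of the fallback described above (hypothetical names).
enum ChecksumType { CRC32, CRC32C }

class BlockChecksumResponse {
    private final ChecksumType crcType; // null models "field absent" (old server)
    BlockChecksumResponse(ChecksumType t) { crcType = t; }
    boolean hasCrcType() { return crcType != null; }
    ChecksumType getCrcType() { return crcType; }
}

class ChecksumInference {
    // Stand-in for issuing OP_READ_BLOCK for the first byte and pulling the
    // checksum type out of the read response, which has always carried it.
    static ChecksumType readTypeFromFirstByte() {
        return ChecksumType.CRC32C; // whatever the block was actually written with
    }

    static ChecksumType resolve(BlockChecksumResponse resp) {
        if (resp.hasCrcType()) {
            return resp.getCrcType();   // new server: explicit field is authoritative
        }
        return readTypeFromFirstByte(); // old server: infer via OP_READ_BLOCK
    }
}
```

This keeps the fast path (one RPC) for new servers while avoiding the false CRC32 answer against older clusters.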