[
https://issues.apache.org/jira/browse/HDFS-16644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17756008#comment-17756008
]
ASF GitHub Bot commented on HDFS-16644:
---------------------------------------
zhuzilong2013 opened a new pull request, #5962:
URL: https://github.com/apache/hadoop/pull/5962
### Description of PR
This change prevents qop values from being overwritten with illegal values.
JIRA: HDFS-16644
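
The guard described above can be sketched as follows. This is a minimal illustration, not the actual patch: the class name `QopGuard` and method `resolveQop` are hypothetical, and the sketch only assumes the documented fact that the legal `javax.security.sasl.qop` tokens are `auth`, `auth-int`, and `auth-conf` (so a stray value such as `D` must never replace the negotiated QOP).

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the idea in this PR: never let an illegal value
// overwrite the negotiated javax.security.sasl.qop property.
public class QopGuard {

    // The only QOP tokens defined for javax.security.sasl.qop.
    private static final List<String> VALID_QOPS =
        Arrays.asList("auth", "auth-int", "auth-conf");

    /**
     * Returns the candidate QOP if it is a legal token; otherwise keeps
     * the current value, rejecting illegal input such as "D".
     */
    public static String resolveQop(String current, String candidate) {
        if (candidate != null && VALID_QOPS.contains(candidate)) {
            return candidate;
        }
        return current;
    }

    public static void main(String[] args) {
        // An illegal candidate leaves the negotiated QOP untouched.
        System.out.println(resolveQop("auth-conf", "D"));
        // A legal candidate is accepted.
        System.out.println(resolveQop("auth", "auth-int"));
    }
}
```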
### How was this patch tested?
Tested in a production environment.
### For code changes:
- [ ] Does the title of this PR start with the corresponding JIRA issue id
(e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the
endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies
licensed in a way that is compatible for inclusion under [ASF
2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`,
`NOTICE-binary` files?
> java.io.IOException Invalid token in javax.security.sasl.qop
> ------------------------------------------------------------
>
> Key: HDFS-16644
> URL: https://issues.apache.org/jira/browse/HDFS-16644
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 3.2.1
> Reporter: Walter Su
> Priority: Major
>
> deployment:
> server side: kerberos enabled cluster with jdk 1.8 and hdfs-server 3.2.1
> client side:
> I ran `hadoop fs -put` with a test file, with a Kerberos ticket initialized
> first, using identical core-site.xml & hdfs-site.xml configurations.
> With client version 3.2.1, it succeeds.
> With client version 2.8.5, it succeeds.
> With client version 2.10.1, it fails. The client-side error is:
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient:
> SASL encryption trust check: localHostTrusted = false, remoteHostTrusted =
> false
> 2022-06-27 01:06:15,781 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode:
> DataNode{data=FSDataset{dirpath='[/mnt/disk1/hdfs, /mnt/***/hdfs,
> /mnt/***/hdfs, /mnt/***/hdfs]'}, localName='emr-worker-***.***:9866',
> datanodeUuid='b1c7f64a-6389-4739-bddf-***', xmitsInProgress=0}:Exception
> transfering block BP-1187699012-10.****-***:blk_1119803380_46080919 to mirror
> 10.*****:9866
> java.io.IOException: Invalid token in javax.security.sasl.qop: D
> at
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.readSaslMessage(DataTransferSaslUtil.java:220)
> Once any version 2.10.1 client connects to the HDFS server, the DataNode
> stops accepting client connections entirely; even a version 3.2.1 client
> can no longer connect. Within a short time, all DataNodes reject client
> connections.
> The problem persists even after replacing the DataNode with version 3.3.0
> or replacing Java with JDK 11.
> The problem disappears after replacing the DataNode with version 3.2.0, so
> I suspect it is related to HDFS-13541
--
This message was sent by Atlassian Jira
(v8.20.10#820010)