[ https://issues.apache.org/jira/browse/HADOOP-18726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17718426#comment-17718426 ]

ASF GitHub Bot commented on HADOOP-18726:
-----------------------------------------

hadoop-yetus commented on PR #5612:
URL: https://github.com/apache/hadoop/pull/5612#issuecomment-1530888344

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 36s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 12s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 22s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 58s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  mvnsite  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  shadedclient  |  19m 17s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 44s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not generate ASF License warnings.  |
   |  |   |  88m 26s |  |  |


   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5612/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5612 |
   | Optional Tests | dupname asflicense mvnsite unit codespell detsecrets shellcheck shelldocs |
   | uname | Linux 0ba13099fc75 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7bcd58a297363a51d005b50b32f6d7abd752e2b9 |
   | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5612/1/testReport/ |
   | Max. process+thread count | 554 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5612/1/console |
   | versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Set the locale to avoid printing useless logs.
> ----------------------------------------------
>
>                 Key: HADOOP-18726
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18726
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: Shuyan Zhang
>            Assignee: Shuyan Zhang
>            Priority: Major
>              Labels: pull-request-available
>
> In our production environment, when a Hadoop process is started under a non-English locale, it prints a large number of unexpected ERROR logs. The following is the error output printed by a DataNode:
> ```
> 2023-05-01 09:10:50,299 ERROR org.apache.hadoop.hdfs.server.datanode.FileIoProvider: error in op transferToSocketFully : 断开的管道
> 2023-05-01 09:10:50,299 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: BlockSender.sendChunks() exception: 
> java.io.IOException: 断开的管道
>         at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
>         at sun.nio.ch.FileChannelImpl.transferToDirectlyInternal(FileChannelImpl.java:428)
>         at sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:493)
>         at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:608)
>         at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:242)
>         at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.transferToSocketFully(FileIoProvider.java:260)
>         at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:559)
>         at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:801)
>         at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:755)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:580)
>         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
>         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:258)
>         at java.lang.Thread.run(Thread.java:745)
> [... the same "error in op transferToSocketFully : 断开的管道" errors and BlockSender.sendChunks() stack traces repeat many more times ...]
> ```
> The root cause is that the code inspects the message string of an IOException to decide whether to print exception logs, but the message text changes with the locale ("断开的管道" is the localized form of "Broken pipe"), so the check fails under non-English locales.
> This large number of error logs is very misleading, so this patch sets the environment variable LANG in hadoop-env.sh to pin the locale.
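A minimal Java sketch (not the actual Hadoop code) of the failure mode the description refers to: deciding whether an exception is benign by matching its message text. The class name `LocaleMessageDemo`, the method `isBrokenPipe`, and both message strings are illustrative assumptions.

```java
import java.io.IOException;

public class LocaleMessageDemo {
    // English-only message check of the kind described above. It breaks
    // when the JVM/native layer localizes the errno text, e.g.
    // "Broken pipe" becomes "断开的管道" under a zh_CN locale.
    static boolean isBrokenPipe(IOException e) {
        String msg = e.getMessage();
        return msg != null && msg.contains("Broken pipe");
    }

    public static void main(String[] args) {
        IOException english = new IOException("Broken pipe");
        IOException localized = new IOException("断开的管道");
        // Under an English locale the check recognizes the condition;
        // under a Chinese locale the same error is not recognized and
        // gets logged at ERROR level instead of being suppressed.
        System.out.println(isBrokenPipe(english));   // true
        System.out.println(isBrokenPipe(localized)); // false
    }
}
```

Pinning LANG in hadoop-env.sh, as the patch does, keeps the native error text in English so string checks like this behave consistently regardless of the host locale.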



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
