[ https://issues.apache.org/jira/browse/MAPREDUCE-7496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

youlong chen updated MAPREDUCE-7496:
------------------------------------
    Description: 
Title: FrameworkUploader throws NoSuchElementException despite successful file 
upload

Priority: Major
Component: mapred
Affects Version/s: 3.1.1

Description:
The mapred FrameworkUploader tool throws a NoSuchElementException when 
uploading a file to HDFS, even though the file is successfully uploaded with 
the specified replication factor.

Steps to reproduce:
1. Execute the following command:
{code}
$HADOOP_PREFIX/bin/mapred frameworkuploader \
    -input /test/test.jar \
    -fs hdfs://namenode:9000 \
    -target hdfs:///test.jar#test_test \
    -initialReplication 3 \
    -finalReplication 10 \
    -acceptableReplication 9 \
    -timeout 60
{code}

Current behavior:
1. The file is successfully uploaded to HDFS with the specified replication 
factor (10)
2. The tool throws a NoSuchElementException:
{code}
Exception in thread "main" java.util.NoSuchElementException
        at java.util.HashMap$HashIterator.nextNode(HashMap.java:1471)
        at java.util.HashMap$ValueIterator.next(HashMap.java:1498)
        at java.util.Collections.min(Collections.java:598)
        at org.apache.hadoop.mapred.uploader.FrameworkUploader.getSmallestReplicatedBlockCount(FrameworkUploader.java:243)
        at org.apache.hadoop.mapred.uploader.FrameworkUploader.endUpload(FrameworkUploader.java:260)
        at org.apache.hadoop.mapred.uploader.FrameworkUploader.buildPackage(FrameworkUploader.java:293)
        at org.apache.hadoop.mapred.uploader.FrameworkUploader.run(FrameworkUploader.java:117)
        at org.apache.hadoop.mapred.uploader.FrameworkUploader.main(FrameworkUploader.java:560)
{code}

Expected behavior:
The tool should complete successfully without throwing any exception since the 
file upload and replication operations were successful.

Analysis:
The exception is thrown from the getSmallestReplicatedBlockCount() method while 
it looks for the minimum replication count across the uploaded file's blocks. 
Collections.min() throws NoSuchElementException when given an empty collection, 
so the map that tracks per-block replica counts appears to be empty at that 
point, even though the upload itself succeeded.
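Below is a minimal, self-contained illustration of this failure mode; the map 
name blockReplicaCounts is made up for the example and is not necessarily the 
field FrameworkUploader uses.
{code}
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class MinOnEmptyMapDemo {
  public static void main(String[] args) {
    // Stand-in for the per-block replica-count map that
    // getSmallestReplicatedBlockCount() appears to build; if no entries
    // were ever added, Collections.min() has nothing to iterate over.
    Map<Long, Integer> blockReplicaCounts = new HashMap<>();

    // Throws java.util.NoSuchElementException via the value iterator,
    // matching the HashMap$ValueIterator.next frame in the stack trace above.
    int smallest = Collections.min(blockReplicaCounts.values());
    System.out.println(smallest);
  }
}
{code}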

Impact:
While the file upload operation succeeds, the exception prevents the tool from 
completing normally and could cause issues in automated workflows that depend 
on the tool's exit status.

Suggested Fix:
Add null and empty-collection handling to the getSmallestReplicatedBlockCount() 
method so that Collections.min() is never invoked on an empty collection and 
the NoSuchElementException cannot occur.
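A minimal sketch of such a guard, assuming (purely for illustration) that the 
replica-count map is passed in as a parameter named blockReplicaCounts and that 
a negative return value can signal "no block data yet"; the real field name, 
method signature, and return convention in FrameworkUploader may differ.
{code}
import java.util.Collections;
import java.util.Map;

// Hypothetical helper mirroring the intent of
// FrameworkUploader#getSmallestReplicatedBlockCount; the names and the -1
// convention are illustrative only.
final class ReplicationGuardSketch {
  static long smallestReplicatedBlockCount(Map<Long, Integer> blockReplicaCounts) {
    if (blockReplicaCounts == null || blockReplicaCounts.isEmpty()) {
      // No block information collected yet: report "not replicated yet"
      // instead of letting Collections.min() throw NoSuchElementException.
      return -1;
    }
    return Collections.min(blockReplicaCounts.values());
  }
}
{code}
With a guard like this, the caller could treat the negative value as "not yet 
sufficiently replicated" and keep polling until the timeout, rather than 
crashing after a successful upload.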




> FrameworkUploader throws NoSuchElementException despite successful file upload
> ------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-7496
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7496
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>    Affects Versions: 3.4.1
>            Reporter: youlong chen
>            Priority: Major
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
