Apache Hadoop qbt Report: branch-3.3+JDK8 on Linux/x86_64

2022-10-20 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/78/

[Oct 13, 2022, 12:37:35 PM] (Steve Loughran) HADOOP-18292. Fix s3 select tests when running against unsupported storage class (#4489)
[Oct 14, 2022, 10:45:43 AM] (Steve Loughran) HADOOP-18382. AWS SDK v2 upgrade prerequisites (#4698)
[Oct 14, 2022, 10:46:14 AM] (Steve Loughran) HADOOP-18481. AWS v2 SDK upgrade log to not warn about standard AWS Credential Providers. (#4973)
[Oct 15, 2022, 2:09:05 PM] (noreply) HADOOP-17563. Upgrade BouncyCastle to 1.68 (#3980) (#5015)
[Oct 17, 2022, 4:34:21 AM] (Ayush Saxena) HADOOP-18493. Upgrade jackson-databind to 2.12.7.1 (#5011). Contributed by PJ Fanning.
[Oct 17, 2022, 4:53:13 AM] (Ayush Saxena) HADOOP-18360. Update commons-csv from 1.0 to 1.9.0. (#4928). Contributed by fanshilun.
[Oct 18, 2022, 2:05:08 PM] (noreply) HADOOP-18497. Upgrade commons-text version to 1.10.0 to fix CVE-2022-42889. (#5037).
[Oct 18, 2022, 2:28:55 PM] (Steve Loughran) HADOOP-18476. Abfs and S3A FileContext bindings to close wrapped filesystems in finalizer (#4966)
[Oct 19, 2022, 12:08:27 PM] (noreply) HADOOP-18304. Improve user-facing S3A committers documentation (#4478)
[Oct 19, 2022, 3:02:36 PM] (noreply) HADOOP-18465. Fix S3A SSE test skip when encryption is disabled (#4925)




-1 overall


The following subsystems voted -1:
blanks pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s):
      hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

Failed junit tests :

   hadoop.hdfs.server.datanode.TestBPOfferService
   hadoop.hdfs.server.namenode.TestFsck
   hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks
   hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem
   hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService

   cc:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/78/artifact/out/results-compile-cc-root.txt [48K]

   javac:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/78/artifact/out/results-compile-javac-root.txt [376K]

   blanks:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/78/artifact/out/blanks-eol.txt [14M]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/78/artifact/out/blanks-tabs.txt [2.0M]

   checkstyle:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/78/artifact/out/results-checkstyle-root.txt [14M]

   pathlen:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/78/artifact/out/results-pathlen.txt [16K]

   pylint:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/78/artifact/out/results-pylint.txt [20K]

   shellcheck:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/78/artifact/out/results-shellcheck.txt [20K]

   xml:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/78/artifact/out/xml.txt [28K]

   javadoc:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/78/artifact/out/results-javadoc-javadoc-root.txt [1.1M]

   unit:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/78/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [544K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/78/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt [24K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/78/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt [96K]

Powered by Apache Yetus 0.14.0-SNAPSHOT   https://yetus.apache.org


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2022-10-20 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1019/

[Oct 19, 2022, 7:26:47 AM] (noreply) HADOOP-18304. Improve user-facing S3A committers documentation (#4478)
[Oct 19, 2022, 1:38:11 PM] (noreply) HADOOP-18189 S3APrefetchingInputStream to support status probes when closed (#5036)
[Oct 19, 2022, 11:11:28 PM] (noreply) YARN-11328. Refactoring part of the code of SQLFederationStateStore. (#4976)




-1 overall


The following subsystems voted -1:
blanks hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s):
      hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

Failed junit tests :

   hadoop.hdfs.server.namenode.ha.TestObserverNode
   hadoop.hdfs.server.federation.router.TestRouterRPCMultipleDestinationMountTableResolver

   cc:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1019/artifact/out/results-compile-cc-root.txt [96K]

   javac:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1019/artifact/out/results-compile-javac-root.txt [528K]

   blanks:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1019/artifact/out/blanks-eol.txt [14M]
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1019/artifact/out/blanks-tabs.txt [2.0M]

   checkstyle:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1019/artifact/out/results-checkstyle-root.txt [14M]

   hadolint:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1019/artifact/out/results-hadolint.txt [8.0K]

   pathlen:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1019/artifact/out/results-pathlen.txt [16K]

   pylint:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1019/artifact/out/results-pylint.txt [20K]

   shellcheck:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1019/artifact/out/results-shellcheck.txt [28K]

   xml:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1019/artifact/out/xml.txt [24K]

   javadoc:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1019/artifact/out/results-javadoc-javadoc-root.txt [392K]

   unit:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1019/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [612K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1019/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [124K]

Powered by Apache Yetus 0.14.0-SNAPSHOT   https://yetus.apache.org


[jira] [Reopened] (HADOOP-18233) Possible race condition with TemporaryAWSCredentialsProvider

2022-10-20 Thread Steve Loughran (Jira)


 [ https://issues.apache.org/jira/browse/HADOOP-18233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran reopened HADOOP-18233:
-------------------------------------

> Possible race condition with TemporaryAWSCredentialsProvider
> ------------------------------------------------------------
>
> Key: HADOOP-18233
> URL: https://issues.apache.org/jira/browse/HADOOP-18233
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth, fs/s3
>Affects Versions: 3.3.1
> Environment: spark v3.2.0
> hadoop-aws v3.3.1
> java version 1.8.0_265 via zulu-8
>Reporter: Jason Sleight
>Priority: Major
>  Labels: pull-request-available
>
> I'm in the process of upgrading spark+hadoop versions for my workflows and 
> observing a weird behavior regression.  I'm setting
> {code:java}
> spark.hadoop.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider
> spark.hadoop.fs.s3.impl=org.apache.hadoop.fs.s3a.S3AFileSystem
> spark.sql.catalogImplementation=hive
> spark.hadoop.aws.region=us-west-2
> ...many other things, I think these might be the relevant ones though...{code}
> in Spark config and I'm observing some non-fatal warnings/exceptions (see 
> below for some examples).  The warnings/exceptions randomly appear for some 
> tasks, which causes them to fail, but then when Spark retries the task it 
> will succeed.  The initial tasks don't always fail either, just sometimes.
> I also found that if I switch to a SimpleAWSCredentials and use static keys, 
> then I don't see any issues.
> My old setup was spark v3.0.2 with hadoop-aws v3.2.1 which also does not have 
> these warnings/exceptions.
> From reading some other tickets I thought perhaps adding
> {code:java}
> spark.sql.hive.metastore.sharedPrefixes=com.amazonaws {code}
> would help, but it did not.
> Appreciate any suggestions for how to proceed or debug further :)
>  
> Example stack traces:
> First one for an s3 read
> {code:java}
>  WARN TaskSetManager: Lost task 27.0 in stage 4.0 (TID 29) ( executor 
> 13): java.nio.file.AccessDeniedException: 
> s3a://bucket/path/to/part.snappy.parquet: 
> org.apache.hadoop.fs.s3a.CredentialInitializationException: Provider 
> TemporaryAWSCredentialsProvider has no credentials
>     at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:206)
>     at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:170)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:3289)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3185)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:3053)
>     at 
> org.apache.parquet.hadoop.util.HadoopInputFile.fromPath(HadoopInputFile.java:39)
>     at 
> org.apache.spark.sql.execution.datasources.parquet.ParquetFooterReader.readFooter(ParquetFooterReader.java:39)
>     at 
> org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat.footerFileMetaData$lzycompute$1(ParquetFileFormat.scala:268)
>     at 
> org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat.footerFileMetaData$1(ParquetFileFormat.scala:267)
>     at 
> org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat.$anonfun$buildReaderWithPartitionValues$2(ParquetFileFormat.scala:270)
>     at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
>     at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:164)
>     at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
>     at 
> org.apache.spark.sql.execution.FileSourceScanExec$$anon$1.hasNext(DataSourceScanExec.scala:522)
>     at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage7.columnartorow_nextBatch_0$(Unknown
>  Source)
>     at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage7.processNext(Unknown
>  Source)
>     at 
> org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
>     at 
> org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:759)
>     at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
>     at 
> org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:140)
>     at 
> org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
>     at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
>     at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
>     at org.apache.spark.scheduler.Task.run(Task.scala:131)
>     at 
> 
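A minimal sketch of the fallback the reporter describes (chaining the static-key provider behind the temporary one). The property names are the standard S3A ones; the class and bucket names are real, but the placeholder credential values are not, and this is an illustration of the workaround, not the eventual fix:

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class S3aCredentialFallback {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // S3A tries the listed providers in order until one yields credentials,
    // so static keys back up the session-credential provider.
    conf.set("fs.s3a.aws.credentials.provider",
        "org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider,"
            + "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider");
    conf.set("fs.s3a.access.key", "ACCESS-KEY-PLACEHOLDER");
    conf.set("fs.s3a.secret.key", "SECRET-KEY-PLACEHOLDER");
    conf.set("fs.s3a.session.token", "SESSION-TOKEN-PLACEHOLDER");
    try (FileSystem fs = FileSystem.get(URI.create("s3a://bucket/"), conf)) {
      System.out.println(fs.getFileStatus(new Path("/")));
    }
  }
}
{code}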

[jira] [Resolved] (HADOOP-18500) Upgrade maven-shade-plugin to 3.3.0

2022-10-20 Thread Steve Loughran (Jira)


 [ https://issues.apache.org/jira/browse/HADOOP-18500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-18500.
-------------------------------------
Fix Version/s: 3.4.0
   Resolution: Fixed

> Upgrade maven-shade-plugin to 3.3.0
> -----------------------------------
>
> Key: HADOOP-18500
> URL: https://issues.apache.org/jira/browse/HADOOP-18500
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Willi Raschkowski
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: ImmutableMap_hadoop-client-runtime.txt, 
> ImmutableMap_hadoop-shaded-guava.txt
>
>
> Maven-shade-plugin rewrites classes when moving them into {{hadoop-client}} 
> JARs. That's true even when it doesn't actually need to modify the byte code 
> of the classes, say for shading.
> We use a tool that checks for classpath duplicates that don't have equal byte 
> code. This tool flags classes brought in via Hadoop. The classes it flagged 
> came on one side from 
> a JAR containing relocated classes ({{hadoop-client-api}} or {{-runtime}}) 
> and the other from the relocated JAR ({{hadoop-common}} or 
> {{hadoop-shaded-guava}}). We checked and the byte code for the same class is 
> indeed different between the relocated and non-relocated JARs.
> This is because maven-shade-plugin, before 3.3.0, was rewriting class files 
> even when the relocation was a "no-op". See MSHADE-391 and 
> [apache/maven-shade-plugin#95|https://github.com/apache/maven-shade-plugin/pull/95].
> {quote}Maven Shade internally uses [ASM's 
> {{ClassRemapper}}|https://asm.ow2.io/javadoc/org/objectweb/asm/commons/ClassRemapper.html]
>  and defines a custom {{Remapper}} subclass, which takes care of relocation, 
> partially doing the work by itself and partially delegating to the ASM parent 
> class. An ASM {{ClassReader}} reads each class file from the original JAR and 
> *unconditionally* writes it into a {{{}ClassWriter{}}}, plugging in the 
> transformer.
> This transformation, even if not a single relocation (package name mapping) 
> takes place, often leads to binary differences between original class and 
> transformed class, because constant pool or stack map frames have been 
> adjusted, not changing the functionality of the class, but making it look 
> like something changed when comparing class files before and after the 
> relocation process.
> {quote}
> Upgrading to maven-shade-plugin 3.3.0 fixes the unnecessary rewrite of 
> classes.
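The duplicate check the reporter describes boils down to comparing the bytes of the same class entry across two jars. A minimal sketch under stated assumptions: the jar paths and the DupClassCheck wrapper are hypothetical, the entry path follows the hadoop-thirdparty relocation prefix suggested by the ImmutableMap attachments, and Java 9+ is assumed for InputStream.readAllBytes():

{code:java}
import java.util.Arrays;
import java.util.jar.JarFile;

public final class DupClassCheck {
  public static void main(String[] args) throws Exception {
    // Class present in both jars; assumed to exist for this sketch.
    String entry = "org/apache/hadoop/thirdparty/com/google/common/collect/ImmutableMap.class";
    try (JarFile left = new JarFile("hadoop-client-runtime.jar");   // hypothetical paths
         JarFile right = new JarFile("hadoop-shaded-guava.jar")) {
      byte[] a = left.getInputStream(left.getEntry(entry)).readAllBytes();
      byte[] b = right.getInputStream(right.getEntry(entry)).readAllBytes();
      System.out.println(Arrays.equals(a, b)
          ? "byte-identical"
          : "same class, different byte code");
    }
  }
}
{code}

With maven-shade-plugin before 3.3.0, a check like this reports differing bytes even when no relocation applied; after the upgrade the untouched classes should come out byte-identical.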






[jira] [Resolved] (HADOOP-18471) An unhandled ArrayIndexOutOfBoundsException in DefaultStringifier.storeArray() if provided with an empty input

2022-10-20 Thread Steve Loughran (Jira)


 [ https://issues.apache.org/jira/browse/HADOOP-18471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-18471.
-------------------------------------
Fix Version/s: 3.4.0
   3.3.9
   Resolution: Fixed

Merged into branch-3.3 and trunk; not in 3.3.5, as I'm being more selective there right now.

> An unhandled ArrayIndexOutOfBoundsException in 
> DefaultStringifier.storeArray() if provided with an empty input
> ---------------------------------------------------------------
>
> Key: HADOOP-18471
> URL: https://issues.apache.org/jira/browse/HADOOP-18471
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, io
>Affects Versions: 3.3.4
>Reporter: FuzzingTeam
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> The code throws an unhandled ArrayIndexOutOfBoundsException when method 
> _storeArray_ of DefaultStringifier.java is called with an empty array as 
> input.
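To make the failure mode concrete, a minimal repro sketch, assuming hadoop-common 3.3.4 on the classpath; the key name and element type are arbitrary. storeArray() derives the item class from items[0], so an empty array fails before anything is stored:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.DefaultStringifier;

public final class EmptyArrayRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Enable Java serialization so the non-empty case would work too.
    conf.setStrings("io.serializations",
        "org.apache.hadoop.io.serializer.WritableSerialization",
        "org.apache.hadoop.io.serializer.JavaSerialization");
    Integer[] empty = new Integer[0];
    // Pre-fix: throws ArrayIndexOutOfBoundsException instead of a
    // handled error, because items[0] is read to determine the class.
    DefaultStringifier.storeArray(conf, empty, "demo.empty.array");
  }
}
{code}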






[jira] [Resolved] (HADOOP-18156) Address JavaDoc warnings in classes like MarkerTool, S3ObjectAttributes, etc.

2022-10-20 Thread Steve Loughran (Jira)


 [ https://issues.apache.org/jira/browse/HADOOP-18156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-18156.
-------------------------------------
Fix Version/s: 3.3.9
   Resolution: Fixed

> Address JavaDoc warnings in classes like MarkerTool, S3ObjectAttributes, etc.
> ------------------------------------------------------------------------------
>
> Key: HADOOP-18156
> URL: https://issues.apache.org/jira/browse/HADOOP-18156
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.2
>Reporter: Mukund Thakur
>Assignee: Ankit Saurabh
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.9
>
>
> {noformat}
> /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:856:
>  warning: empty <p> tag
> [ERROR]    * <p>
> [ERROR] ^
> [ERROR] 
> /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:150:
>  warning: empty <p> tag
> [ERROR]    * <p>
> [ERROR] ^
> [ERROR] 
> /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:964:
>  warning: no @param for source
> [ERROR] public ScanArgsBuilder withSourceFS(final FileSystem source) {
> [ERROR]^
> [ERROR] 
> /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:964:
>  warning: no @return
> [ERROR] public ScanArgsBuilder withSourceFS(final FileSystem source) {
> [ERROR]^
> [ERROR] 
> /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:970:
>  warning: no @param for p
> [ERROR] public ScanArgsBuilder withPath(final Path p) {
> [ERROR]^
> [ERROR] 
> /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:970:
>  warning: no @return
> [ERROR] public ScanArgsBuilder withPath(final Path p) {
> [ERROR]^
> [ERROR] 
> /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:976:
>  warning: no @param for d
> [ERROR] public ScanArgsBuilder withDoPurge(final boolean d) {
> [ERROR]^
> [ERROR] 
> /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:976:
>  warning: no @return
> [ERROR] public ScanArgsBuilder withDoPurge(final boolean d) {
> [ERROR]^
> [ERROR] 
> /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:982:
>  warning: no @param for min
> [ERROR] public ScanArgsBuilder withMinMarkerCount(final int min) {
> [ERROR]^
> [ERROR] 
> /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:982:
>  warning: no @return
> [ERROR] public ScanArgsBuilder withMinMarkerCount(final int min) {
> [ERROR]^
> [ERROR] 
> /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:988:
>  warning: no @param for max
> [ERROR] public ScanArgsBuilder withMaxMarkerCount(final int max) {
> [ERROR]^
> [ERROR] 
> /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:988:
>  warning: no @return
> [ERROR] public ScanArgsBuilder withMaxMarkerCount(final int max) {
> [ERROR]^
> [ERROR] 
> /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:994:
>  warning: no @param for l
> [ERROR] public ScanArgsBuilder withLimit(final int l) {
> [ERROR]^
> [ERROR] 
> 
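The shape of the fix is routine javadoc completion. A hedged illustration: the signature is copied from the warning output above, but the comment text and the field assignment are guesses at intent, not the committed patch (surrounding builder class omitted):

{code:java}
  /**
   * Set the source filesystem to scan.
   * @param source source filesystem
   * @return this builder
   */
  public ScanArgsBuilder withSourceFS(final FileSystem source) {
    this.sourceFS = source;   // field name assumed for illustration
    return this;
  }
{code}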

Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2022-10-20 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/820/

No changes




-1 overall


The following subsystems voted -1:
asflicense hadolint mvnsite pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.fs.TestFileUtil
   hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
   hadoop.hdfs.TestRollingUpgrade
   hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat
   hadoop.hdfs.server.federation.router.TestRouterQuota
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter
   hadoop.mapreduce.lib.input.TestLineRecordReader
   hadoop.mapred.TestLineRecordReader
   hadoop.resourceestimator.service.TestResourceEstimatorService
   hadoop.resourceestimator.solver.impl.TestLpSolver
   hadoop.yarn.sls.TestSLSRunner
   hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceHandlerImpl
   hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceAllocator
   hadoop.yarn.server.resourcemanager.TestClientRMService
   hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker

   cc:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/820/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/820/artifact/out/diff-compile-javac-root.txt [488K]

   checkstyle:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/820/artifact/out/diff-checkstyle-root.txt [14M]

   hadolint:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/820/artifact/out/diff-patch-hadolint.txt [4.0K]

   mvnsite:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/820/artifact/out/patch-mvnsite-root.txt [568K]

   pathlen:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/820/artifact/out/pathlen.txt [12K]

   pylint:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/820/artifact/out/diff-patch-pylint.txt [20K]

   shellcheck:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/820/artifact/out/diff-patch-shellcheck.txt [72K]

   whitespace:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/820/artifact/out/whitespace-eol.txt [12M]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/820/artifact/out/whitespace-tabs.txt [1.3M]

   javadoc:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/820/artifact/out/patch-javadoc-root.txt [40K]

   unit:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/820/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [220K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/820/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [436K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/820/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [36K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/820/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt [16K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/820/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt [104K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/820/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt [20K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/820/artifact/out/patch-unit-hadoop-tools_hadoop-resourceestimator.txt [16K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/820/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt [28K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/820/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt [20K]
   

[jira] [Created] (HADOOP-18501) [ABFS]: Partial Read should add to throttling metric

2022-10-20 Thread Pranav Saxena (Jira)
Pranav Saxena created HADOOP-18501:
-----------------------------------

 Summary: [ABFS]: Partial Read should add to throttling metric
 Key: HADOOP-18501
 URL: https://issues.apache.org/jira/browse/HADOOP-18501
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Pranav Saxena
Assignee: Sree Bhattacharyya


Error Description:
At present, SAS Tokens generated from the Azure Portal may or may not carry a leading ?. SAS Tokens that do carry the ? prefix cause an error in the driver due to a clash of query parameters, so customers currently have to remove the ? prefix by hand before passing the SAS Token in.

Mitigation:
After receiving the SAS Token from the provider, check whether a leading ? is present; if so, strip it before passing the SAS Token on.
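A sketch of the mitigation as described above; the class, method, and field names are hypothetical, not the ABFS driver's actual code:

{code:java}
public final class SasTokenUtil {
  static String stripLeadingQuestionMark(String sasToken) {
    // Azure Portal SAS tokens may arrive with a leading '?', which would
    // clash with query-parameter assembly in the driver; remove it.
    return (sasToken != null && sasToken.startsWith("?"))
        ? sasToken.substring(1)
        : sasToken;
  }
}
{code}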


