dongjoon-hyun commented on issue #25805: [SPARK-29082][core] Skip delegation 
token generation if no credentials are available.
URL: https://github.com/apache/spark/pull/25805#issuecomment-533376012
 
 
   FYI, all the failures have recovered and Jenkins is continuing to the next step. As long as the outage didn't mask another failure, I expect Jenkins to pass.
   - https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/job/spark-master-test-maven-hadoop-3.2-jdk-11/443/
   ```
   FileSuite:
   - text files
   - text files (compressed)
   - text files do not allow null rows
   - SequenceFiles
   - SequenceFile (compressed)
   - SequenceFile with writable key
   - SequenceFile with writable value
   - SequenceFile with writable key and value
   - implicit conversions in reading SequenceFiles
   - object files of ints
   - object files of complex types
   - object files of classes from a JAR
   - write SequenceFile using new Hadoop API
   - read SequenceFile using new Hadoop API
   - binary file input as byte array
   - portabledatastream caching tests
   - portabledatastream persist disk storage
   - portabledatastream flatmap tests
   - SPARK-22357 test binaryFiles minPartitions
   - minimum split size per node and per rack should be less than or equal to maxSplitSize
   - fixed record length binary file as byte array
   - negative binary record length should raise an exception
   - file caching
   - prevent user from overwriting the empty directory (old Hadoop API)
   - prevent user from overwriting the non-empty directory (old Hadoop API)
   - allow user to disable the output directory existence checking (old Hadoop API)
   - prevent user from overwriting the empty directory (new Hadoop API)
   - prevent user from overwriting the non-empty directory (new Hadoop API)
   - allow user to disable the output directory existence checking (new Hadoop API)
   - save Hadoop Dataset through old Hadoop API
   - save Hadoop Dataset through new Hadoop API
   - Get input files via old Hadoop API
   - Get input files via new Hadoop API
   - spark.files.ignoreCorruptFiles should work both HadoopRDD and NewHadoopRDD
   - spark.hadoopRDD.ignoreEmptySplits work correctly (old Hadoop API)
   - spark.hadoopRDD.ignoreEmptySplits work correctly (new Hadoop API)
   - spark.files.ignoreMissingFiles should work both HadoopRDD and NewHadoopRDD
   ```
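
For context, the change under review skips delegation token generation when no credentials are available. A minimal sketch of such a guard, assuming Hadoop's `UserGroupInformation` API and a hypothetical `fetchTokens` callback (an illustration, not the actual SPARK-29082 patch):

```scala
import org.apache.hadoop.security.UserGroupInformation

// Hypothetical sketch, not the actual patch: only attempt delegation token
// generation when the current user logged in via Kerberos, since without
// credentials the token fetch would fail anyway.
object DelegationTokenGuard {
  def obtainTokensIfPossible(fetchTokens: () => Unit): Unit = {
    val ugi = UserGroupInformation.getCurrentUser
    if (ugi.hasKerberosCredentials()) {
      fetchTokens() // credentials present; safe to contact secure services
    }
    // otherwise skip: no credentials means no tokens can be obtained
  }
}
```

Guarding up front like this avoids each token provider failing separately with authentication errors in non-Kerberos deployments.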
